{"id":1858,"date":"2025-11-13T08:18:08","date_gmt":"2025-11-13T08:18:08","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=1858"},"modified":"2026-03-16T11:00:29","modified_gmt":"2026-03-16T11:00:29","slug":"ai-regulations-in-the-us","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/ai-regulations-in-the-us\/","title":{"rendered":"AI Regulations in the US"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div><p>The importance of US AI regulation, both for the industry and for US global leadership, cannot be overstated. It affects domestic and international markets alike. Currently, US AI oversight is a fragmented, sector-specific patchwork rather than a comprehensive federal law. The FDA authorizes medical devices, HHS enforces HIPAA, and the FTC targets algorithmic bias through Operation AI Comply. With 1,247 AI devices authorized and $145 million in penalties collected, organizations face enforcement now, not later. Understanding this multi-agency landscape is critical for compliance and competitive advantage. [<a href=\"https:\/\/www.quinnemanuel.com\/the-firm\/publications\/when-machines-discriminate-the-rise-of-ai-bias-lawsuits\/\" target=\"_blank\" rel=\"noopener\">1<\/a>, <a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-software-medical-device\" target=\"_blank\" rel=\"noopener\">4<\/a>]<\/p>\n<h2>Why AI Regulation Matters in the United States<\/h2>\n<p>AI systems now make consequential decisions affecting patient diagnoses, loan approvals, hiring outcomes, and insurance coverage. 
When these systems fail or discriminate, the harm extends beyond individual cases to undermine trust in entire industries.<\/p>\n<p><b>Bias in AI systems represents a persistent challenge.<\/b> Researchers continue to document how AI models can reflect and amplify societal biases related to gender, race, disability, and socioeconomic status.<a href=\"https:\/\/pacific.ai\/staging\/3667\/unveiling-bias-in-language-models-gender-race-disability-and-socioeconomic-perspectives\/\"> Unveiling Bias in Language Models<\/a> demonstrates how these biases manifest across demographic dimensions, creating disparate impacts that violate civil rights protections.<\/p>\n<p><b>Enforcement actions signal regulatory priorities.<\/b> The FTC launched Operation AI Comply in September 2024, announcing five enforcement actions against companies using AI deceptively. HHS has collected nearly $145 million in HIPAA penalties since 2003 across 152 enforcement actions. [<a href=\"https:\/\/www.quinnemanuel.com\/the-firm\/publications\/when-machines-discriminate-the-rise-of-ai-bias-lawsuits\/\" target=\"_blank\" rel=\"noopener\">1<\/a>]<\/p>\n<p>Ethical AI practices require more than voluntary commitments. Responsible AI development must be codified into enforceable standards that protect individuals while enabling innovation. The question is not whether to regulate AI, but how to do so effectively across a complex, multi-stakeholder ecosystem.<\/p>\n<h2>Current Regulatory Landscape: Fragmented but Evolving<\/h2>\n<p>The US AI regulation landscape is characterized by a series of sector-specific rules, rather than a unified federal legislative framework. Multiple agencies share fragmented AI oversight and responsibilities. The FDA regulates medical devices, HHS enforces healthcare privacy rules, the FTC monitors consumer protection, the EEOC addresses employment discrimination, and NIST develops technical standards. 
Each agency applies existing statutory authorities to AI within its domain.<\/p>\n<figure class=\"mb50 tac\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1865 size-full\" src=\"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US.webp\" alt=\"A structured infographic summarizing six key themes in US AI regulation: safety &amp; security, transparency, industry commitments, export controls, and the balance of federal vs. state roles.\" width=\"1280\" height=\"671\" \/><\/figure>\n<h3>State Law Fragmentation<\/h3>\n<p><b>Key state requirements create additional compliance obligations:<\/b><\/p>\n<ul>\n<li><b>Colorado:<\/b> AI Act (SB 24-205), effective June 30, 2026, mandates risk management for high-risk systems.<\/li>\n<li><b>California:<\/b> CCPA\/CPRA requires transparency for automated decision-making.<\/li>\n<li><b>Illinois:<\/b> BIPA regulates biometric data collection and processing.<\/li>\n<li><b>New York City: <\/b>Local Law 144 mandates bias audits for employment AI tools.<\/li>\n<\/ul>\n<h3>Governance Gap and Need for Unified Frameworks<\/h3>\n<p>Congress continues to consider comprehensive AI legislation while federal agencies coordinate their approaches. Organizations face overlapping jurisdiction, with no single authority providing holistic AI governance in the US. Companies require an <a href=\"https:\/\/pacific.ai\/staging\/3667\/ai-policies\/\">AI governance platform<\/a> to map systems across jurisdictions and track regulatory changes. Manual compliance tracking becomes untenable as AI deployments scale. 
At this stage, <a title=\"ai governance implementation\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-governance-implementation\/\">AI governance implementation<\/a> becomes critical for translating fragmented regulatory requirements into consistent internal processes that can be applied across teams, systems, and jurisdictions.<\/p>\n<h2>Key Federal Agencies Involved in AI Oversight<\/h2>\n<p>Several federal agencies exercise significant authority over AI systems, each bringing unique expertise and enforcement powers:<\/p>\n<h3>Food and Drug Administration (FDA)<\/h3>\n<p>The FDA uses a risk-based framework to regulate AI medical devices. Its 2021 Action Plan for AI\/ML Software as a Medical Device (SaMD) allows predetermined change control plans, which let algorithms update without new submissions as long as changes stay within pre-set limits. [<a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-enabled-medical-devices\" target=\"_blank\" rel=\"noopener\">2<\/a>] Since 2025, the agency&#8217;s approach has centered on a comprehensive lifecycle-based regulatory framework, which led to the January 2025 <a href=\"https:\/\/www.fda.gov\/regulatory-information\/search-fda-guidance-documents\/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing\" target=\"_blank\" rel=\"noopener\">draft guidance titled &#8220;Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,&#8221;<\/a> which outlines expectations for managing AI\/ML-enabled medical devices throughout their total product lifecycle (TPLC). By May 2025, the FDA had authorized 1,247 AI devices, reflecting rapid growth over prior years. Healthcare systems show 88% AI adoption, but only 18% have mature governance. 
[<a href=\"https:\/\/www.hfma.org\/press-releases\/health-system-adoption-of-ai-outpaces-internal-governance-and-strategy\/\" target=\"_blank\" rel=\"noopener\">3<\/a>, <a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-software-medical-device\" target=\"_blank\" rel=\"noopener\">4<\/a>]<\/p>\n<h3>Federal Trade Commission (FTC)<\/h3>\n<p>The FTC enforces consumer protection <a title=\"healthcare ai laws\" href=\"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-a-review-of-evaluation-frameworks\/\">laws<\/a> against deceptive AI practices. <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2024\/09\/ftc-announces-crackdown-deceptive-ai-claims-schemes\" target=\"_blank\" rel=\"noopener\">Operation AI Comply<\/a>, launched in September 2024, announced five enforcement actions targeting companies making false AI claims or using AI in unfair ways. The FTC focuses on &#8220;AI washing&#8221; (misleading marketing about AI capabilities) and on algorithmic discrimination that violates consumer protection standards. [<a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2024\/09\/ftc-announces-crackdown-deceptive-ai-claims-schemes\" target=\"_blank\" rel=\"noopener\">5<\/a>]<\/p>\n<h3>Department of Health and Human Services (HHS)<\/h3>\n<p>HHS ensures AI systems comply with HIPAA privacy and security rules. Since April 2003, HHS has received 374,321 HIPAA complaints, resolved 31,191 cases with corrective actions, and collected $144,878,972 in penalties across 152 enforcement actions. [<a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/compliance-enforcement\/data\/enforcement-highlights\/index.html\" target=\"_blank\" rel=\"noopener\">6<\/a>] The Office for Civil Rights has clarified that AI vendors processing protected health information typically qualify as business associates, triggering HIPAA compliance obligations. 
[<a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/covered-entities\/index.html\" target=\"_blank\" rel=\"noopener\">7<\/a>]<\/p>\n<h3>Equal Employment Opportunity Commission (EEOC)<\/h3>\n<p>The EEOC prevents discriminatory AI use in hiring and employment decisions. The agency enforces Title VII of the Civil Rights Act, the Americans with Disabilities Act, and other civil rights laws when AI tools produce discriminatory outcomes. Recent guidance emphasizes that employers remain liable for AI vendor discrimination. [<a href=\"https:\/\/www.eeoc.gov\/history\/eeoc-history-2020-2024\" target=\"_blank\" rel=\"noopener\">8<\/a>]<\/p>\n<h3>National Institute of Standards and Technology (NIST)<\/h3>\n<p>NIST develops voluntary AI standards that influence regulatory approaches across agencies. The <a title=\"ai risk management\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-risk-management-audit\/\">AI Risk Management<\/a> Framework (AI RMF) provides structured guidance for identifying, assessing, and mitigating AI risks. While voluntary, NIST standards often become de facto requirements as agencies reference them in enforcement actions. [<a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\" target=\"_blank\" rel=\"noopener\">9<\/a>]<\/p>\n<figure class=\"mb50 tac\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1862 size-full\" src=\"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-RMF-Timeline-and-Engagements.webp\" alt=\"AI RMF Timeline and Engagements\" width=\"1336\" height=\"700\" \/><\/figure>\n<h3>Department of Justice (DOJ)<\/h3>\n<p>The DOJ enforces civil rights laws when AI systems discriminate in areas like housing, lending, and public accommodations. 
The department has signaled increased focus on algorithmic discrimination and is developing guidance jointly with other agencies.<\/p>\n<p>The DOJ maintains an<a href=\"https:\/\/www.justice.gov\/crt\/ai\" target=\"_blank\" rel=\"noopener\"> AI and Civil Rights resource page<\/a> with guidance documents, enforcement actions, and coordination initiatives. [<a href=\"https:\/\/www.justice.gov\/crt\/ai\" target=\"_blank\" rel=\"noopener\">10<\/a>]<\/p>\n<h2>Major Legislative Proposals and Executive Orders<\/h2>\n<p>Federal policymakers have pursued AI governance through both executive action and proposed legislation, though comprehensive federal AI law remains elusive.<\/p>\n<h3>AI Bill of Rights Blueprint<\/h3>\n<p>The White House Office of Science and Technology Policy released the <a href=\"https:\/\/bidenwhitehouse.archives.gov\/ostp\/ai-bill-of-rights\/\" target=\"_blank\" rel=\"noopener\">Blueprint for an AI Bill of Rights<\/a> in October 2022. This non-binding framework identifies five principles:<\/p>\n<ul>\n<li>Safe and effective systems through testing and monitoring<\/li>\n<li>Algorithmic discrimination protections<\/li>\n<li><a title=\"Generative AI Data Privacy\" href=\"https:\/\/pacific.ai\/staging\/3667\/generative-ai-data-privacy-issues-challenges\/\">Data privacy<\/a> safeguards<\/li>\n<li>Notice and explanation when AI affects decisions<\/li>\n<li>Human alternatives and fallback options<\/li>\n<\/ul>\n<p>While not legally enforceable, the Blueprint influences agency rulemaking and sets expectations for responsible AI practices.<\/p>\n<figure class=\"mb50 tac\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1863 size-full\" src=\"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/Blueprint-for-an-AI-Bill-of-Rights_Key-Principles.webp\" alt=\"Blueprint for an AI Bill of Rights: Key Principles\" width=\"1336\" height=\"700\" \/><\/figure>\n<h3>Executive Order 14110<\/h3>\n<p>President Biden issued AI <a 
href=\"https:\/\/bidenwhitehouse.archives.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence<\/a> on October 30, 2023. The order directed federal agencies to develop AI <a title=\"healthcare ai safety\" href=\"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/\">safety<\/a> standards, protect privacy, advance equity, and promote innovation. It required companies developing large AI models to report training activities and established new guidelines for federal AI procurement.<\/p>\n<p>However, the order was revoked in January 2025, demonstrating how AI policy can shift between administrations.<\/p>\n<h3>Algorithmic Accountability Act of 2025<\/h3>\n<p>This proposed bill would require companies to conduct impact assessments of high-risk AI systems, covering performance, bias, discrimination, privacy, and security risks. The bill has been reintroduced in successive Congresses but has not yet passed. [<a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/house-bill\/5511\" target=\"_blank\" rel=\"noopener\">11<\/a>]<\/p>\n<p>National AI strategy and US AI legislation continue to evolve. Organizations are now developing <a href=\"https:\/\/pacific.ai\/staging\/3667\/product\/\" target=\"_blank\" rel=\"noopener\">testing for Generative AI<\/a> capabilities rather than waiting for federal mandates. 
Proactive testing identifies bias, validates performance, and documents compliance efforts that regulators increasingly expect.<\/p>\n<h2>Sector-Specific AI Regulations (Healthcare, Finance, Employment)<\/h2>\n<p>Different sectors face distinct AI compliance requirements based on existing laws and industry-specific risks:<\/p>\n<h3>Healthcare AI Compliance<\/h3>\n<p>AI compliance for healthcare organizations spans multiple regulatory frameworks:<\/p>\n<ul>\n<li><b>HIPAA<\/b> requires <a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/covered-entities\/sample-business-associate-agreement-provisions\/index.html\" target=\"_blank\" rel=\"noopener\">Business Associate Agreements<\/a> with AI vendors, technical safeguards for protected health information, and comprehensive audit trails.<\/li>\n<li><b>FDA device regulation<\/b> applies when AI diagnoses, treats, or prevents disease, with requirements varying by risk classification. [<a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-software-medical-device\" target=\"_blank\" rel=\"noopener\">4<\/a>]<\/li>\n<li><b>State medical practice laws<\/b> may restrict AI&#8217;s role in clinical decision-making.<\/li>\n<li><b>Anti-discrimination laws<\/b> prohibit AI systems that produce disparate health outcomes based on protected characteristics. [<a href=\"https:\/\/www.federalregister.gov\/documents\/2024\/05\/06\/2024-08711\/nondiscrimination-in-health-programs-and-activities\" target=\"_blank\" rel=\"noopener\">12<\/a>]<\/li>\n<\/ul>\n<p>The intersection of these requirements creates complex compliance obligations. A clinical decision support system might simultaneously fall under FDA device regulation, HIPAA privacy rules, and civil rights protections. 
<a href=\"https:\/\/pacific.ai\/staging\/3667\/introduction-to-generative-ai-governance-in-healthcare\/\">Generative AI governance in healthcare<\/a> addresses these unique challenges through specialized frameworks.<\/p>\n<h3>Finance Sector AI Rules<\/h3>\n<p>Financial services firms face stringent oversight of AI under existing financial regulation:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.fdic.gov\/banker-resource-center\/fair-lending\" target=\"_blank\" rel=\"noopener\"><b>Fair Lending laws<\/b><\/a> (Equal Credit Opportunity Act, Fair Housing Act) prohibit discriminatory lending algorithms.<\/li>\n<li><a href=\"https:\/\/www.ftc.gov\/legal-library\/browse\/statutes\/fair-credit-reporting-act\" target=\"_blank\" rel=\"noopener\"><b>Fair Credit Reporting Act<\/b><\/a> requires accuracy, <a href=\"https:\/\/pacific.ai\/staging\/3667\/fairness-bias-in-frontier-llms-one-word-change-six-clinical-escalations\/\">fairness<\/a>, and transparency in credit decisions.<\/li>\n<li><a href=\"https:\/\/www.fincen.gov\/resources\/statutes-and-regulations\/bank-secrecy-act\" target=\"_blank\" rel=\"noopener\"><b>Bank Secrecy Act<\/b><\/a> and anti-money laundering rules apply to AI-powered transaction monitoring.<\/li>\n<li><a href=\"https:\/\/www.sec.gov\/ai\" target=\"_blank\" rel=\"noopener\"><b>SEC regulations<\/b><\/a> govern AI use in investment advice and trading.<\/li>\n<\/ul>\n<p>Regulators scrutinize AI models for disparate impact on protected classes. 
The Consumer Financial Protection Bureau has emphasized that fair lending laws apply equally to human and algorithmic decisions.<\/p>\n<h3>Employment AI Guidance<\/h3>\n<p>Workplace AI systems must comply with multiple employment laws:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.eeoc.gov\/statutes\/title-vii-civil-rights-act-1964\" target=\"_blank\" rel=\"noopener\"><b>Title VII of the Civil Rights Act<\/b><\/a> prohibits employment discrimination.<\/li>\n<li><a href=\"https:\/\/www.ada.gov\/\" target=\"_blank\" rel=\"noopener\"><b>Americans with Disabilities Act<\/b><\/a> requires reasonable accommodations.<\/li>\n<li><a href=\"https:\/\/www.eeoc.gov\/age-discrimination\" target=\"_blank\" rel=\"noopener\"><b>Age Discrimination in Employment Act<\/b><\/a> protects older workers.<\/li>\n<li><b>State laws<\/b> like <a href=\"https:\/\/www.nyc.gov\/site\/dca\/about\/automated-employment-decision-tools.page\" target=\"_blank\" rel=\"noopener\">New York City&#8217;s Local Law 144 <\/a>require bias audits for automated employment decision tools.<\/li>\n<\/ul>\n<p>The EEOC AI guidance clarifies that employers remain liable when AI vendors\u2019 tools produce discriminatory outcomes. Organizations cannot outsource legal responsibility to technology providers.<\/p>\n<h2>Challenges in Regulating AI Across States and Sectors<\/h2>\n<p>The fragmented regulatory landscape creates substantial challenges for organizations deploying AI at scale across states:<\/p>\n<h3>Enforcement gaps<\/h3>\n<p>No single agency has comprehensive oversight. Systems may violate multiple laws simultaneously, but enforcement depends on which agency investigates. Limited federal-state coordination creates uncertainty.<\/p>\n<h3>Innovation outpacing policy<\/h3>\n<p>AI capabilities advance faster than regulatory frameworks. Generative AI emerged before oversight mechanisms existed. 
New applications challenge legal categories: Does an AI chatbot providing medical information constitute practicing medicine?<\/p>\n<h3>Sectoral silos<\/h3>\n<p>Agencies regulate within traditional domains without cross-sector coordination. Modern AI systems often span multiple sectors simultaneously, creating compliance complexity.<\/p>\n<h3>Resource constraints<\/h3>\n<p>Regulators lack technical expertise to evaluate complex AI systems. Existing laws predate AI technology, creating uncertainty about liability, explainability requirements, and decision-making boundaries.<\/p>\n<h2>Calls for Federal AI Legislation and What It Could Include<\/h2>\n<p>A growing consensus is emerging among experts, industry leaders, and policymakers that comprehensive federal AI legislation is necessary to address regulatory fragmentation.<\/p>\n<h3>Why federal legislation matters<\/h3>\n<p>A national framework would:<\/p>\n<ul>\n<li>Establish consistent standards across states, reducing compliance complexity.<\/li>\n<li>Close gaps in current oversight by creating comprehensive requirements.<\/li>\n<li>Provide legal clarity for organizations and regulators.<\/li>\n<li>Enable US competitiveness by creating a predictable regulatory environment.<\/li>\n<li>Protect individuals through enforceable rights and remedies.<\/li>\n<\/ul>\n<h3>Key components of proposed federal AI laws:<\/h3>\n<h4>Transparency requirements<\/h4>\n<p>Legislation would likely mandate disclosure when AI makes or substantially influences consequential decisions. Organizations would need to explain AI systems\u2019 purposes, data sources, and decision-making logic in accessible language.<\/p>\n<h4>Bias testing and mitigation<\/h4>\n<p>Under AI bias regulation, federal law may require regular bias assessments across demographic categories. Organizations would need to test AI systems before deployment and monitor for discriminatory patterns during operation. 
Remediation requirements would apply when bias is detected.<\/p>\n<h4>Accountability mechanisms<\/h4>\n<p>Proposed frameworks include:<\/p>\n<ul>\n<li>Clear liability standards for AI-caused harms.<\/li>\n<li>Requirements for human oversight of high-risk decisions.<\/li>\n<li>Audit trails documenting AI system operations.<\/li>\n<li>Incident reporting when AI systems cause harm.<\/li>\n<\/ul>\n<h4>Risk-based regulation<\/h4>\n<p>Most proposals adopt tiered approaches based on AI risk levels. High-risk systems affecting health, safety, civil rights, or employment would face stringent requirements. Such systems might also require robust AI transparency rules and AI accountability protocols. Lower-risk applications would have lighter compliance obligations.<\/p>\n<h4>Enforcement protocols<\/h4>\n<p>Federal legislation would likely establish:<\/p>\n<ul>\n<li>Designated enforcement agencies with AI expertise.<\/li>\n<li>Civil penalties for violations.<\/li>\n<li>Private rights of action for affected individuals.<\/li>\n<li>Regular reporting requirements for high-risk AI deployers, often fulfilled via certified <a href=\"https:\/\/pacific.ai\/staging\/3667\/certification\/\">AI compliance tool<\/a> solutions.<\/li>\n<\/ul>\n<p>Preemption questions and ethical AI use remain contentious. Should federal law preempt state AI regulations, or should states retain authority to impose stricter requirements? Industry groups favor preemption for consistency, while consumer advocates prefer state flexibility particularly on AI transparency rules and AI accountability standards that exceed federal baselines.<\/p>\n<h2>Key Takeaway: Preparing for Compliance in a Shifting Regulatory Environment<\/h2>\n<p>Organizations cannot wait for federal AI legislation. Regulatory expectations exist now through agency enforcement actions, even without explicit statutory mandates. Proactive governance provides a competitive advantage. 
Companies establishing robust AI oversight today adapt more easily to future regulations. Delay risks rushed compliance, enforcement actions, and reputational damage.<\/p>\n<p><b>Essential preparation steps:<\/b><\/p>\n<ol>\n<li><b>Internal audits.<\/b> Inventory AI systems across the organization. Assess risk levels, regulatory touchpoints, and compliance status.<\/li>\n<li><b>Transparency practices.<\/b> Communicate clearly about AI use. Provide notice when AI influences decisions.<\/li>\n<li><b>Bias testing.<\/b> Establish regular testing protocols across demographic categories. Document methodologies and remediation processes.<\/li>\n<li><b>Documentation standards.<\/b> Maintain comprehensive records covering AI lifecycles from development through monitoring.<\/li>\n<li><b>Governance structures.<\/b> Create cross-functional committees with legal, compliance, IT, and business representatives.<\/li>\n<li><b>Vendor management.<\/b> Implement rigorous due diligence. Require transparency about algorithms and training data.<\/li>\n<li><b>Regulatory monitoring.<\/b> Track federal guidance, state legislation, and enforcement actions from FDA, FTC, and HHS.<\/li>\n<li><b>Staff training.<\/b> Educate employees about responsible AI principles and compliance obligations.<\/li>\n<\/ol>\n<p>Organizations should consider <a href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-a-responsible-ai-audit\/\">responsible AI audits<\/a> to validate governance frameworks. Independent audits identify gaps and demonstrate good-faith efforts to regulators.<\/p>\n<h2><strong>References<\/strong>:<\/h2>\n<p>[1] Quinn Emanuel Urquhart &amp; Sullivan, LLP. 
\u201cWhen Machines Discriminate: The Rise of AI Bias Lawsuits.\u201d<a href=\"https:\/\/www.quinnemanuel.com\/the-firm\/publications\/when-machines-discriminate-the-rise-of-ai-bias-lawsuits\/\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.quinnemanuel.com\/the-firm\/publications\/when-machines-discriminate-the-rise-of-ai-bias-lawsuits\/<\/a><\/p>\n<p>[2] U.S. Food and Drug Administration. \u201cArtificial Intelligence (AI) and Machine Learning (ML) Enabled Medical Devices.\u201d<a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-enabled-medical-devices\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-enabled-medical-devices<\/a><\/p>\n<p>[3] Healthcare Financial Management Association. \u201cHealth System Adoption of AI Outpaces Internal Governance and Strategy.\u201d<a href=\"https:\/\/www.hfma.org\/press-releases\/health-system-adoption-of-ai-outpaces-internal-governance-and-strategy\/\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.hfma.org\/press-releases\/health-system-adoption-of-ai-outpaces-internal-governance-and-strategy\/<\/a><\/p>\n<p>[4] U.S. Food and Drug Administration. \u201cArtificial Intelligence and Software as a Medical Device (SaMD).\u201d<a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-software-medical-device\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-software-medical-device<\/a><\/p>\n<p>[5] Federal Trade Commission. 
\u201cFTC Announces Crackdown on Deceptive AI Claims and Schemes.\u201d Published September 2024.<a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2024\/09\/ftc-announces-crackdown-deceptive-ai-claims-schemes\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2024\/09\/ftc-announces-crackdown-deceptive-ai-claims-schemes<\/a><\/p>\n<p>[6] U.S. Department of Health and Human Services, Office for Civil Rights. \u201cEnforcement Highlights \u2013 Current.\u201d Updated November 21, 2024.<a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/compliance-enforcement\/data\/enforcement-highlights\/index.html\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.hhs.gov\/hipaa\/for-professionals\/compliance-enforcement\/data\/enforcement-highlights\/index.html<\/a><\/p>\n<p>[7] U.S. Department of Health and Human Services, Office for Civil Rights. \u201cCovered Entities and Business Associates.\u201d Updated August 21, 2024.<a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/covered-entities\/index.html\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.hhs.gov\/hipaa\/for-professionals\/covered-entities\/index.html<\/a><\/p>\n<p>[8] U.S. Equal Employment Opportunity Commission. \u201cEEOC History: 2020 \u2013 2024.\u201d<a href=\"https:\/\/www.eeoc.gov\/history\/eeoc-history-2020-2024\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.eeoc.gov\/history\/eeoc-history-2020-2024<\/a><\/p>\n<p>[9] National Institute of Standards and Technology. \u201cAI Risk Management Framework.\u201d<a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.nist.gov\/itl\/ai-risk-management-framework<\/a><\/p>\n<p>[10] U.S. Department of Justice, Civil Rights Division. 
\u201cArtificial Intelligence and Civil Rights.\u201d<a href=\"https:\/\/www.justice.gov\/crt\/ai\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.justice.gov\/crt\/ai<\/a><\/p>\n<p>[11] U.S. Congress. H.R. 5511 \u2013 Algorithmic Accountability Act of 2025 (119th Congress, 2025\u20132026).<a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/house-bill\/5511\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.congress.gov\/bill\/119th-congress\/house-bill\/5511<\/a><\/p>\n<p>[12] U.S. Department of Health and Human Services. \u201cNon-Discrimination in Health Programs and Activities.\u201d Federal Register, May 6, 2024.<a href=\"https:\/\/www.federalregister.gov\/documents\/2024\/05\/06\/2024-08711\/nondiscrimination-in-health-programs-and-activities\" target=\"_blank\" rel=\"noopener\"> https:\/\/www.federalregister.gov\/documents\/2024\/05\/06\/2024-08711\/nondiscrimination-in-health-programs-and-activities<\/a><\/p>\n<h2>FAQ<\/h2>\n<p><b>Is there a comprehensive federal AI law in the United States?<\/b><\/p>\n<p>No, the United States does not have a single comprehensive federal AI law like the EU AI Act. Instead, AI is regulated through sector-specific rules administered by various federal agencies, AI executive orders (which can change between administrations), and existing laws applied to AI use cases.<\/p>\n<p><b>Which federal agencies regulate AI in the United States?<\/b><\/p>\n<p>Multiple federal agencies share AI oversight: the FDA regulates medical devices, HHS enforces HIPAA for healthcare data, the FTC monitors consumer protection and deceptive practices, the EEOC addresses employment discrimination, NIST develops technical standards, and the DOJ enforces civil rights laws. Each agency applies its existing statutory authority to AI within its domain.<\/p>\n<p><b>What are the penalties for AI-related regulatory violations?<\/b><\/p>\n<p>Penalties vary by violation type. 
HIPAA violations can result in fines up to $1.5 million per violation category annually. FTC enforcement actions can include millions in civil penalties plus corrective requirements. FDA violations may result in warning letters, product seizures, or injunctions. Employment discrimination cases can include back pay, compensatory damages, and punitive damages.<\/p>\n<p><b>Do I need FDA approval for all healthcare AI systems?<\/b><\/p>\n<p>Not all healthcare AI systems require FDA approval. Only AI that qualifies as a medical device, meaning it diagnoses, treats, cures, mitigates, or prevents disease, falls under FDA jurisdiction. Administrative AI tools, scheduling systems, and billing applications typically do not require FDA review. However, clinical decision support systems that provide specific treatment recommendations often do require authorization.<\/p>\n<p><b>When does Colorado&#8217;s AI Act take effect and who does it apply to?<\/b><\/p>\n<p>Colorado&#8217;s AI Act (SB 24-205) takes effect June 30, 2026. It applies to developers and deployers of &#8220;high-risk&#8221; AI systems that make or substantially factor into consequential decisions about healthcare, employment, education, financial services, housing, insurance, or legal services. Healthcare organizations using AI that significantly impacts patient care or access to services likely fall under this law&#8217;s requirements.<\/p>\n<p><b>What is the FTC&#8217;s Operation AI Comply?<\/b><\/p>\n<p>Operation AI Comply is an FTC enforcement initiative launched in September 2024 targeting companies that make deceptive claims about AI capabilities or use AI in unfair or deceptive ways. The FTC announced five initial enforcement actions, focusing on what it calls &#8220;AI washing&#8221; \u2013\u00a0false or misleading claims about AI functionality. 
This signals increased regulatory scrutiny of AI marketing and deployment practices.<\/p>\n<p><b>How often should we audit our AI systems for compliance?<\/b><\/p>\n<p>Audit frequency depends on risk level. High-risk AI systems affecting clinical decisions should undergo quarterly reviews. Moderate-risk systems may require semi-annual audits. Low-risk administrative AI can be reviewed annually. Any significant algorithm changes, performance drift, or regulatory updates should trigger immediate compliance review. Organizations should also conduct audits before major deployments and after any regulatory enforcement actions in their sector.<\/p>\n<p><b>What is the AI Bill of Rights?<\/b><\/p>\n<p>The Blueprint for an AI Bill of Rights is a non-binding framework released by the White House in October 2022. It identifies five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. While not legally enforceable, it influences agency rulemaking and sets expectations for responsible AI practices.<\/p>\n<p><b>What documentation should we maintain for AI systems?<\/b><\/p>\n<p>Maintain comprehensive records including: training data sources and characteristics, validation and testing results (including bias assessments across demographic groups), deployment parameters and limitations, human oversight protocols, performance monitoring data, security measures, incident reports, vendor contracts and Business Associate Agreements, and all regulatory submissions or correspondence. 
Documentation should cover the entire AI lifecycle from development through decommissioning and be readily accessible for regulatory inquiries.<\/p>\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Is there a comprehensive federal AI law in the United States?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"No, the United States does not have a single comprehensive federal AI law like the EU AI Act. Instead, AI is regulated through sector-specific rules administered by various federal agencies, AI executive orders (which can change between administrations), and existing laws applied to AI use cases.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Which federal agencies regulate AI in the United States?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Multiple federal agencies share AI oversight: the FDA regulates medical devices, HHS enforces HIPAA for healthcare data, the FTC monitors consumer protection and deceptive practices, the EEOC addresses employment discrimination, NIST develops technical standards, and the DOJ enforces civil rights laws. Each agency applies its existing statutory authority to AI within its domain.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What are the penalties for AI-related regulatory violations?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Penalties vary by violation type. HIPAA violations can result in fines up to $1.5 million per violation category annually. FTC enforcement actions can include millions in civil penalties plus corrective requirements. FDA violations may result in warning letters, product seizures, or injunctions. 
Employment discrimination cases can include back pay, compensatory damages, and punitive damages.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Do I need FDA approval for all healthcare AI systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Not all healthcare AI systems require FDA approval. Only AI that qualifies as a medical device\u2014meaning it diagnoses, treats, cures, mitigates, or prevents disease\u2014falls under FDA jurisdiction. Administrative AI tools, scheduling systems, and billing applications typically do not require FDA review. However, clinical decision support systems that provide specific treatment recommendations often do require authorization.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"When does Colorado\u2019s AI Act take effect and who does it apply to?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Colorado\u2019s AI Act (SB 24-205) takes effect June 30, 2026. It applies to developers and deployers of high-risk AI systems that make or substantially factor into consequential decisions about healthcare, employment, education, financial services, housing, insurance, or legal services. Healthcare organizations using AI that significantly impacts patient care or access to services likely fall under this law\u2019s requirements.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is the FTC\u2019s Operation AI Comply?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Operation AI Comply is an FTC enforcement initiative launched in September 2024 targeting companies that make deceptive claims about AI capabilities or use AI in unfair or deceptive ways. 
The FTC announced five initial enforcement actions, focusing on what it calls AI washing\u2014false or misleading claims about AI functionality\u2014signaling increased regulatory scrutiny of AI marketing and deployment practices.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How often should we audit our AI systems for compliance?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Audit frequency depends on risk level. High-risk AI systems affecting clinical decisions should undergo quarterly reviews. Moderate-risk systems may require semi-annual audits. Low-risk administrative AI can be reviewed annually. Any significant algorithm changes, performance drift, or regulatory updates should trigger immediate compliance review.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is the AI Bill of Rights?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The Blueprint for an AI Bill of Rights is a non-binding framework released by the White House in October 2022. It outlines five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. While not legally enforceable, it influences agency rulemaking and expectations for responsible AI practices.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What documentation should we maintain for AI systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Organizations should maintain records covering the full AI lifecycle, including training data sources and characteristics, validation and bias testing results, deployment parameters and limitations, human oversight protocols, performance monitoring data, security controls, incident reports, vendor contracts and Business Associate Agreements, and all regulatory submissions or correspondence. 
Documentation should be readily accessible for audits and regulatory inquiries.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>US AI regulation importance cannot be overstated for the entire industry and its global dominance. It affects both domestic and international markets. Currently, US AI oversight is fragmented and sector-specific patchwork rather than a comprehensive federal law. The FDA authorizes medical devices, HHS enforces HIPAA, and the FTC targets algorithmic bias through Operation AI Comply. [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":1859,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[118],"tags":[],"class_list":["post-1858","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI regulations in the US<\/title>\n<meta name=\"description\" content=\"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI governance.\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI regulations in the US\" \/>\n<meta property=\"og:description\" content=\"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI governance.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/ai-regulations-in-the-us\/\" \/>\n<meta 
property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-13T08:18:08+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-16T11:00:29+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Oksana Meier\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Oksana Meier\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/\"},\"author\":{\"name\":\"Oksana Meier\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/0b044eb000be91a76b3fc2b64f8b7dd5\"},\"headline\":\"AI Regulations in the 
US\",\"datePublished\":\"2025-11-13T08:18:08+00:00\",\"dateModified\":\"2026-03-16T11:00:29+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/\"},\"wordCount\":2911,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/AI-regulations-in-the-US-preview.jpg\",\"articleSection\":[\"Articles\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/\",\"name\":\"AI regulations in the US\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/AI-regulations-in-the-US-preview.jpg\",\"datePublished\":\"2025-11-13T08:18:08+00:00\",\"dateModified\":\"2026-03-16T11:00:29+00:00\",\"description\":\"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI 
governance.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/AI-regulations-in-the-US-preview.jpg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/AI-regulations-in-the-US-preview.jpg\",\"width\":550,\"height\":440,\"caption\":\"AI regulations in the United States illustrated by a connected US map with security lock, representing state-level AI laws, compliance requirements, and AI governance oversight.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/ai-regulations-in-the-us\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI Regulations in the US\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific 
AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/0b044eb000be91a76b3fc2b64f8b7dd5\",\"name\":\"Oksana Meier\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/cropped-OksanaMeier_1-96x96.png\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/cropped-OksanaMeier_1-96x96.png\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/cropped-OksanaMeier_1-96x96.png\",\"caption\":\"Oksana Meier\"},\"description\":\"Oksana is an experienced Product Marketing Manager at Pacific AI and an active contributor to open-source AI initiatives. She specializes in ethical AI and implementation strategies for AI and ML solutions. Oksana holds a Master's degree in Information Control Systems and Technology and is currently pursuing an International EMBA at the University of St. 
Gallen (HSG).\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/oksanameier\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/oksana\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI regulations in the US","description":"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI governance.","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"AI regulations in the US","og_description":"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI governance.","og_url":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2025-11-13T08:18:08+00:00","article_modified_time":"2026-03-16T11:00:29+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg","type":"image\/jpeg"}],"author":"Oksana Meier","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Oksana Meier","Est. 
reading time":"14 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/"},"author":{"name":"Oksana Meier","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/0b044eb000be91a76b3fc2b64f8b7dd5"},"headline":"AI Regulations in the US","datePublished":"2025-11-13T08:18:08+00:00","dateModified":"2026-03-16T11:00:29+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/"},"wordCount":2911,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg","articleSection":["Articles"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/","url":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/","name":"AI regulations in the US","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg","datePublished":"2025-11-13T08:18:08+00:00","dateModified":"2026-03-16T11:00:29+00:00","description":"Explore the evolving landscape of AI regulations in the US, including key federal agencies, laws, and sector-specific rules shaping responsible AI 
governance.","breadcrumb":{"@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/ai-regulations-in-the-us\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/AI-regulations-in-the-US-preview.jpg","width":550,"height":440,"caption":"AI regulations in the United States illustrated by a connected US map with security lock, representing state-level AI laws, compliance requirements, and AI governance oversight."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/ai-regulations-in-the-us\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"AI Regulations in the US"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific 
AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/0b044eb000be91a76b3fc2b64f8b7dd5","name":"Oksana Meier","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/cropped-OksanaMeier_1-96x96.png","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/cropped-OksanaMeier_1-96x96.png","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/11\/cropped-OksanaMeier_1-96x96.png","caption":"Oksana Meier"},"description":"Oksana is an experienced Product Marketing Manager at Pacific AI and an active contributor to open-source AI initiatives. She specializes in ethical AI and implementation strategies for AI and ML solutions. Oksana holds a Master's degree in Information Control Systems and Technology and is currently pursuing an International EMBA at the University of St. 
Gallen (HSG).","sameAs":["https:\/\/www.linkedin.com\/in\/oksanameier\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/oksana\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1858","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=1858"}],"version-history":[{"count":10,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1858\/revisions"}],"predecessor-version":[{"id":2294,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1858\/revisions\/2294"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/1859"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=1858"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=1858"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=1858"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}