StateReg.Reference

Federal AI in Healthcare Regulations 2026: FDA SaMD, HIPAA, HHS Section 1557

Federal regulations for AI in healthcare in 2026: agencies, statutes, tax credits, preemption analysis, and links to all 50 state guides.

Verified May 12, 2026 · 5 statute sources
AI-drafted, human-reviewed

How we build these guides

Sourcing

Adapters pull primary data from the FAA, IRS, OpenStates, DSIRE, NORML, PubMed, Census/BLS/FRED, Google Civic, and Data.gov.

Generation pipeline

Multi-stage AI pipeline: structural outline → long-form draft → cross-family fact-check editor → readability polish → FAQ enrichment. Each stage uses a different model family so factual drift is caught before publish.

Quality gates

Soft gates on word count, citation count, and banned-phrase screening; hard blocks if required sections are missing.

Verification cadence

Pages are re-verified quarterly. verified_at updates on every pass.

Not legal advice. Consult an attorney or CPA for binding guidance.

Federal · AI in healthcare

Federal Regulators

Food and Drug Administration (FDA): The FDA regulates AI-based medical devices, including software that diagnoses, treats, or prevents disease. Under its Software as a Medical Device (SaMD) framework, the agency reviews AI systems for safety and effectiveness before they reach the market, applying different levels of scrutiny depending on the device's risk classification. The FDA also oversees algorithm modifications and updates through predetermined change control plans designed for continuously learning systems.

HHS Office for Civil Rights (OCR): OCR enforces HIPAA's Privacy and Security Rules as they apply to AI systems that create, receive, maintain, or transmit protected health information (PHI). The office investigates breaches, audits covered entities' compliance with safeguards for AI tools processing patient data, and issues guidance on permissible uses of health information in algorithm training and deployment. OCR also enforces Section 1557 nondiscrimination requirements when AI systems are used by entities receiving federal health funding.

Office of the National Coordinator for Health Information Technology (ONC): ONC establishes standards for interoperability and certifies health IT systems, including AI-enabled clinical decision support tools embedded in electronic health records. The office implements information-blocking rules under the 21st Century Cures Act that affect how AI developers access patient data and how healthcare providers share AI-generated insights. ONC also sets certification criteria addressing predictive algorithms and decision support interventions in certified health IT.

Centers for Medicare & Medicaid Services (CMS): CMS determines reimbursement policy for AI-enabled services and devices under Medicare and Medicaid programs, directly affecting market viability. The agency establishes coverage criteria, billing codes, and payment rates for AI diagnostic and therapeutic tools. CMS also enforces conditions of participation that may require or restrict certain AI uses in hospitals and other facilities receiving federal healthcare dollars.

Key Federal Statutes & Rules

FDA Software as a Medical Device (SaMD) Guidance: The FDA applies a risk-based framework set forth in guidance documents implementing provisions of the Federal Food, Drug, and Cosmetic Act, 21 U.S.C. § 360c. Software that meets the definition of a medical device under 21 U.S.C. § 321(h) requires premarket review—either 510(k) clearance or premarket approval (PMA)—depending on its risk classification. The FDA's Digital Health Center of Excellence publishes non-binding guidance describing when clinical decision support software is and is not regulated, with lower scrutiny for tools that provide information to healthcare professionals who independently review the basis for recommendations.

HIPAA Privacy & Security Rules: The Privacy Rule at 45 C.F.R. Part 160 and Part 164, Subparts A and E, governs how covered entities and business associates may use and disclose PHI, including for AI model training and deployment. The Security Rule at 45 C.F.R. § 164.306 requires administrative, physical, and technical safeguards for electronic PHI processed by AI systems. Business associates that provide AI services to covered entities must sign agreements under 45 C.F.R. § 164.504(e) accepting direct liability for breaches and impermissible uses.

HHS AI/ML in Health Care Request for Information: In 2024, HHS issued a multi-agency RFI (Docket No. HHS-OS-2023-0025) soliciting public input on accountability, fairness, and transparency in AI and machine learning applications across the healthcare sector. While not itself a regulation, this RFI signals agency priorities and informs future rulemaking on algorithmic bias, explainability requirements, and data governance standards.

21st Century Cures Act § 4004: Codified at 42 U.S.C. § 300jj-52, Section 4004 restricts practices that constitute information blocking—conduct that interferes with the access, exchange, or use of electronic health information. A separate Cures Act provision, codified at 21 U.S.C. § 360j(o), excludes certain clinical decision support software from the medical device definition, exempting it from FDA premarket review while still subjecting the surrounding health IT to interoperability and information-blocking requirements under ONC rules at 45 C.F.R. Part 171.

Section 1557 Nondiscrimination: Section 1557 of the Affordable Care Act, 42 U.S.C. § 18116, prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs receiving federal funds. The implementing regulation at 45 C.F.R. Part 92 requires covered entities to ensure that AI tools do not perpetuate or exacerbate health disparities. OCR has clarified that algorithmic bias resulting in differential treatment or disparate impact may constitute a violation, triggering investigation and corrective action.

Federal vs. State: Who Has Authority?

Federal preemption in AI healthcare regulation follows both express and implied models. The FDA's medical device authorities under the Federal Food, Drug, and Cosmetic Act expressly preempt state requirements that are "different from, or in addition to" federal requirements for Class III devices under 21 U.S.C. § 360k(a), though states retain traditional police powers over the practice of medicine and professional licensing. HIPAA at 42 U.S.C. § 1320d-7 generally preempts contrary state privacy laws but allows states to impose more stringent privacy protections, creating a federal floor rather than a ceiling—states like California have enacted stricter health data privacy regimes that apply to AI systems.

The 21st Century Cures Act establishes nationwide interoperability standards but does not occupy the field of AI safety regulation outside the device context, leaving room for state consumer protection laws, unfair business practice statutes, and professional liability rules. States cannot create separate approval pathways for FDA-regulated devices but may impose requirements on data use, algorithmic transparency, and impact assessments that do not directly conflict with federal device clearances.

Several states have enacted AI-specific legislation requiring impact assessments, bias audits, or explainability documentation that applies to healthcare algorithms. These laws coexist with federal requirements when they address gaps in federal coverage—for example, transparency obligations for non-device AI tools or consent requirements for algorithm-driven care recommendations. Businesses deploying AI in healthcare must comply with both federal baseline requirements and the most stringent applicable state law in their jurisdiction, creating a complex patchwork compliance environment.

Preemption challenges typically arise when state tort law imposes liability standards different from FDA's risk-benefit determinations. Courts apply the Supreme Court's framework from Riegel v. Medtronic and subsequent cases, generally finding preemption for claims that would impose requirements beyond federal device approvals while allowing parallel claims based on federal violations. Professional malpractice claims based on negligent use of AI tools generally survive preemption because they enforce standards of care rather than device design requirements.

Pending Federal Legislation

Congress routinely considers bills addressing AI transparency, algorithmic accountability, and patient safety in healthcare settings. Proposed legislation typically falls into several categories: measures requiring explainability and documentation for AI clinical decision support tools; bills establishing federal standards for bias testing and ongoing monitoring of algorithmic performance across demographic groups; proposals creating new FDA review pathways or expanding exemptions for low-risk AI applications; and legislation directing HHS to develop best practices for AI procurement and deployment by federally funded healthcare entities.

Several proposed frameworks would establish federal AI registries requiring developers to disclose training data sources, validation methods, and intended use cases to regulators and healthcare purchasers. Other bills address liability allocation among AI developers, healthcare institutions, and practitioners when algorithmic recommendations contribute to patient harm. Workforce-focused proposals would fund training programs to ensure clinicians can appropriately supervise and override AI systems.

Because specific bills evolve rapidly—with amendments, committee markups, and changing prospects for passage—readers should consult live congressional tracking resources rather than relying on named legislation in static guidance. The Congress.gov database maintained by the Library of Congress provides current bill text, status, and procedural history. Healthcare organizations should monitor bills that have advanced beyond introduction to committee hearings or floor consideration, as these indicate serious legislative momentum that may affect compliance planning timelines.

Frequently Asked Questions

Does my clinical decision support tool require FDA approval?

Whether your AI tool requires FDA clearance depends on its intended use and risk profile. Under 21 U.S.C. § 360j(o), clinical decision support software is exempt from device regulation if it meets four criteria: intended for healthcare professionals (not patients); provides information with the basis for recommendations; allows independent review of data and methodology; and does not direct users to specific diagnostic or treatment decisions. If your software analyzes medical images to detect disease, predicts clinical outcomes driving immediate interventions, or automates diagnostic conclusions without displaying underlying reasoning, it likely qualifies as a device requiring premarket review. Low-risk tools that support rather than replace clinical judgment may qualify for enforcement discretion under FDA guidance. Consult FDA's Digital Health Center or regulatory counsel for product-specific classification before marketing.
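As a rough illustration of that screening logic (not a legal determination), the sketch below encodes the four criteria paraphrased above as a simple checklist. The CDSTool fields and the likely_device helper are hypothetical names for illustration only, not part of any FDA tooling or guidance.

```python
from dataclasses import dataclass

@dataclass
class CDSTool:
    """Hypothetical description of a clinical decision support tool."""
    intended_for_professionals: bool       # used by clinicians, not patients
    shows_basis_for_recommendations: bool  # displays the sources/logic behind output
    allows_independent_review: bool        # clinician can inspect inputs and methodology
    directs_specific_decisions: bool       # tells the user what to diagnose or prescribe

def likely_device(tool: CDSTool) -> bool:
    """Rough screen against the 21 U.S.C. § 360j(o) criteria as paraphrased above.

    Returns True if the tool probably falls inside the device definition and
    therefore warrants a formal regulatory assessment. Illustrative only;
    actual classification depends on FDA guidance and the product's intended use.
    """
    meets_exemption = (
        tool.intended_for_professionals
        and tool.shows_basis_for_recommendations
        and tool.allows_independent_review
        and not tool.directs_specific_decisions
    )
    return not meets_exemption

# Example: an imaging triage model that outputs a diagnosis with no rationale shown
triage_model = CDSTool(True, False, False, True)
print(likely_device(triage_model))  # True -> treat as a potential device
```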

What HIPAA obligations apply when training AI models on patient data?

AI model training using protected health information requires compliance with 45 C.F.R. § 164.506 (permissible uses) and § 164.514 (de-identification standards). Covered entities may use PHI for their own healthcare operations, including developing and implementing decision support tools, without patient authorization. External AI developers serving as business associates must sign agreements under § 164.504(e) and may use PHI only as permitted by the agreement. For training models on multi-institution datasets, options include: obtaining HIPAA authorizations; using data that meets the Safe Harbor or Expert Determination de-identification standards under § 164.514(b); obtaining a waiver of authorization from an IRB or Privacy Board under § 164.512(i); or using limited data sets with data use agreements under § 164.514(e). Each approach carries specific documentation and safeguard requirements that must be satisfied before model training begins.
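For a sense of what the Safe Harbor route involves in practice, here is a minimal sketch that strips a few identifier categories from a record before it enters a training set. The field names and the SAFE_HARBOR_FIELDS list are hypothetical and cover only a subset of the 18 required categories; a real pipeline must address all of them plus the "no actual knowledge" condition in § 164.514(b)(2).

```python
import copy

# Illustrative subset of the 18 Safe Harbor identifier categories; a real
# implementation must handle all of them (names, geography smaller than a
# state, dates, contact details, record numbers, biometrics, photos, etc.).
SAFE_HARBOR_FIELDS = {"name", "street_address", "phone", "email", "mrn", "ssn"}

def deidentify(record: dict) -> dict:
    """Remove direct identifiers and coarsen dates/ages (hypothetical sketch).

    Drops listed fields, truncates dates to the year, and caps reported age
    at 90, since ages over 89 must be aggregated under Safe Harbor.
    """
    out = copy.deepcopy(record)
    for field in SAFE_HARBOR_FIELDS:
        out.pop(field, None)
    if "admit_date" in out:                 # keep only the year of dates
        out["admit_year"] = out.pop("admit_date")[:4]
    if "age" in out and out["age"] > 89:    # aggregate ages over 89
        out["age"] = 90
    return out

record = {"name": "Jane Doe", "mrn": "12345", "age": 93,
          "admit_date": "2025-03-14", "dx_code": "E11.9"}
print(deidentify(record))  # {'age': 90, 'dx_code': 'E11.9', 'admit_year': '2025'}
```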

How do I prove my AI system doesn't violate Section 1557's nondiscrimination requirements?

Demonstrating Section 1557 compliance under 42 U.S.C. § 18116 and 45 C.F.R. Part 92 requires proactive disparities testing and documentation. Conduct bias audits measuring your algorithm's performance across demographic groups defined by race, ethnicity, sex, age, disability status, and language preference before deployment. Document training data representativeness and validation study demographics. Implement ongoing monitoring to detect performance degradation or emerging disparities post-deployment. Establish feedback mechanisms allowing patients and providers to report concerns about differential treatment. Maintain policies prohibiting use of protected characteristics as direct input variables unless clinically justified and evidence-based. When disparities emerge, document the clinical justification or implement corrective measures such as recalibration, additional training data collection, or use restrictions. OCR expects covered entities to apply the same civil rights safeguards to AI-driven decisions as to human decision-making, holding algorithmic tools to the same prohibitions on disparate treatment and disparate impact.
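The bias-audit step is the most directly computable piece of that checklist. The sketch below (plain Python, hypothetical data) compares true positive and false positive rates across subgroups and flags gaps beyond a chosen tolerance; the 0.05 threshold is an arbitrary placeholder, not an OCR standard.

```python
from collections import defaultdict

def subgroup_rates(records):
    """Compute true/false positive rates per demographic group.

    Each record is (group, y_true, y_pred) with 0/1 labels. Hypothetical
    audit sketch; real audits should also cover calibration, sample sizes,
    and intersectional groups.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for group, y_true, y_pred in records:
        key = ("tp" if y_pred else "fn") if y_true else ("fp" if y_pred else "tn")
        counts[group][key] += 1
    rates = {}
    for group, c in counts.items():
        tpr = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else None
        rates[group] = {"tpr": tpr, "fpr": fpr}
    return rates

def flag_disparities(rates, tolerance=0.05):
    """Flag metric gaps between groups larger than `tolerance` (placeholder value)."""
    flags = []
    for metric in ("tpr", "fpr"):
        vals = {g: r[metric] for g, r in rates.items() if r[metric] is not None}
        if vals and max(vals.values()) - min(vals.values()) > tolerance:
            flags.append((metric, vals))
    return flags

data = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 1), ("B", 0, 1)]
print(flag_disparities(subgroup_rates(data)))
```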

Can states require additional AI disclosures beyond federal requirements?

Yes, in most contexts. HIPAA's preemption provision at 42 U.S.C. § 1320d-7 expressly preserves state laws that impose more stringent privacy protections. States may require disclosure of AI use in diagnosis or treatment recommendations, mandate patient consent for algorithmic decision-making, or establish transparency obligations regarding training data and accuracy metrics without conflicting with HIPAA's federal floor. FDA device preemption under 21 U.S.C. § 360k(a) prevents states from imposing different safety or effectiveness requirements on approved medical devices, but states may require labeling about AI involvement or use restrictions in particular clinical contexts as long as these don't contradict FDA's clearance conditions. States retain broad authority over professional licensing and scope-of-practice rules, allowing them to require physician oversight of AI tools, mandate specific training before clinicians may rely on algorithms, or establish liability standards for negligent use. Healthcare organizations operating in multiple states must identify and comply with the most stringent applicable requirements across their service areas.

What records should I keep to demonstrate compliance with federal AI healthcare requirements?

Comprehensive documentation is essential for demonstrating compliance across regulatory regimes. Maintain algorithm development records including: training and validation datasets with demographic composition; performance metrics overall and by subgroup; intended use specifications and contraindications; clinical validation study results; and FDA submissions for device-classified tools. For HIPAA compliance under 45 C.F.R. § 164.530(j), retain business associate agreements, de-identification certifications or expert determinations, authorization forms if obtained, and data use agreements. Document security safeguards under § 164.316 including risk assessments, access controls, audit logs of PHI access by AI systems, and breach response procedures. Keep Section 1557 compliance records such as bias audit results, disparities analyses, and remediation efforts. Maintain contracts with AI vendors specifying responsibilities for regulatory compliance, data rights, and liability allocation. Retention periods follow HIPAA's six-year requirement under § 164.530(j)(2), though FDA and state laws may impose longer periods. These records enable response to regulatory inquiries, support defense against liability claims, and demonstrate good-faith compliance efforts that may mitigate enforcement actions.
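One way to keep those records consistent is a structured manifest generated for each model release. The sketch below is a hypothetical schema (field names are illustrative, not a regulatory template) that bundles the development, HIPAA, and Section 1557 documentation listed above, with a retention date set from HIPAA's six-year floor.

```python
from dataclasses import dataclass, field, asdict
from datetime import date, timedelta
import json

@dataclass
class ModelComplianceRecord:
    """Hypothetical compliance manifest for one model version (illustrative only)."""
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str          # provenance plus demographic composition
    validation_metrics: dict            # overall and per-subgroup performance
    bias_audit_reference: str           # pointer to Section 1557 disparities analysis
    baa_on_file: bool                   # business associate agreement executed
    deidentification_method: str        # "safe_harbor", "expert_determination", etc.
    fda_status: str                     # "not_a_device", "510k_cleared", "pma", ...
    created: date = field(default_factory=date.today)

    def retention_deadline(self) -> date:
        # HIPAA documentation floor: six years (45 C.F.R. § 164.530(j)(2));
        # FDA or state rules may require longer retention.
        return self.created + timedelta(days=6 * 365)

record = ModelComplianceRecord(
    model_name="sepsis-risk", version="2.3.1",
    intended_use="Early warning score for adult inpatients",
    training_data_summary="3 hospitals, 2019-2024, demographics in appendix A",
    validation_metrics={"auroc": 0.81, "auroc_by_group": {"A": 0.82, "B": 0.79}},
    bias_audit_reference="audits/2026-Q1-sepsis-risk.pdf",
    baa_on_file=True, deidentification_method="expert_determination",
    fda_status="not_a_device",
)
print(record.retention_deadline())
print(json.dumps(asdict(record), default=str, indent=2))
```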


State-by-State Guides

Federal law sets the floor — but every state layers its own rules on top. Find your state's specifics:

<!-- FED_BILLS_LIVE_START -->

Pending Federal AI in Healthcare Legislation

Live data from Congress.gov. Updated daily. Pending = introduced and not yet enacted, vetoed, or signed into law.

SRES 490 (119th Congress)

What it does: A resolution affirming the critical importance of preserving the United States' advantage in artificial intelligence and ensuring that the United States achieves and maintains artificial intelligence dominance.

Latest status: Referred to the Committee on Foreign Relations. (2025-11-06)

HRES 836 (119th Congress)

What it does: Calling on the United States to champion a regional artificial intelligence strategy in the Americas to foster inclusive artificial intelligence systems that combat biases within marginalized groups and promote social justice, economic well-being, and democratic values.

Latest status: Referred to the Committee on Foreign Affairs, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (2025-10-28)

HRES 694 (119th Congress)

What it does: Expressing the sense of the House of Representatives that the Centers for Medicare & Medicaid Services should halt the pilot program and should not jeopardize seniors' access to critical health care by utilizing artificial intelligence to determine Medicare coverage.

Latest status: Referred to the Committee on Ways and Means, and in addition to the Committee on Energy and Commerce, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (2025-09-10)

S 2606 (119th Congress)

What it does: A bill to require the Commander of United States Cyber Command to complete development of a roadmap for industry collaboration on artificial intelligence-enabled cyber capabilities for cyberspace operations of the Department of Defense, and for other purposes.

Latest status: Read twice and referred to the Committee on Armed Services. (2025-07-31)

S 2381 (119th Congress)

What it does: PROACTIV Artificial Intelligence Data Act of 2025.

Latest status: Read twice and referred to the Committee on Commerce, Science, and Transportation. (2025-07-22)

S 1638 (119th Congress)

What it does: Protection Against Foreign Adversarial Artificial Intelligence Act of 2025.

Latest status: Read twice and referred to the Committee on Commerce, Science, and Transportation. (2025-05-07)

HR 3210 (119th Congress)

What it does: Artificial Intelligence Literacy and Inclusion Act.

Latest status: Referred to the Committee on Science, Space, and Technology, and in addition to the Committees on Education and Workforce, Small Business, and Energy and Commerce, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (2025-05-06)

S 501 (119th Congress)

What it does: Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats.

Latest status: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (2025-02-10)

S 321 (119th Congress)

What it does: Decoupling America's Artificial Intelligence Capabilities from China Act of 2025.

Latest status: Read twice and referred to the Committee on the Judiciary. (2025-01-29)

S 232 (119th Congress)

What it does: Preventing Algorithmic Collusion Act of 2025.

Latest status: Read twice and referred to the Committee on the Judiciary. (2025-01-23)

Source: Congress.gov. Data refreshes daily — verify with the linked bill page before relying on it.

<!-- FED_BILLS_LIVE_END -->
Sources & Verification (5)
  • Code of Federal Regulations (eCFR.gov) — primary source for federal regulatory text.
  • Congress.gov — full text and status of pending federal legislation.
  • Federal Register — proposed and final rules, agency notices.
  • IRS.gov — Internal Revenue Code, tax credits, and reporting guidance.
  • GovInfo.gov — authoritative federal publications and statutes.

Last verified: May 12, 2026

Editorial process: See methodology →

How we verify: 9 source adapters (FAA, DSIRE, IRS, OpenStates, etc.) → AI draft → AI editor → AI polish → spot human review.

Affiliate disclosure — we may earn a commission

More tools for AI in healthcare

Affiliate disclosure: some links below are affiliate links (Amazon and partner programs). If you buy through them, we may earn a small commission at no extra cost to you. Product selection is not influenced by commission — see our full disclosure.