StateReg.Reference

Strictest vs. most lenient states for AI in healthcare

Side-by-side: which states impose the heaviest AI-in-healthcare rules and which are the friendliest, with the specific signals that separate them.

Verified April 26, 2026
AI-drafted, human-reviewed

How we verify

Each guide is built from authoritative sources (state legislatures, FAA, IRS, DSIRE, OpenStates, etc.), drafted by AI, edited by a second AI pass, polished, then spot-reviewed by a human before publication.

Multi-state · AI in healthcare

Side-by-Side Ranking Table

State | Strict / Lenient | Key Signal
Colorado | Strictest | SB 24-205 (signed May 17, 2024): enforceable high-risk AI deployer obligations, Feb. 1, 2026 effective date; plus DOI Bulletin B-5.38 on algorithmic insurance decisions, already in force
California | Strict | CMIA (Cal. Civil Code §§ 56–56.37) covers AI health data inference; separate utilization-management guardrails prohibit AI-only denial decisions; CPRA adds a consumer-rights layer
Washington | Strict | My Health My Data Act (RCW 19.373), effective March 31, 2024, carries a private right of action, the only state health-data AI rule in this set with that enforcement mechanism
Wyoming | Most Lenient | No AI statute, no board guidance, no state-specific AI rule; the W.S. 33-26-101 medical practice act is the only state touchpoint
South Dakota | Lenient | No enacted AI chapter confirmed by legislature review; SDCL Title 36 standard-of-care rules are the sole state layer
West Virginia | Lenient | No AI in Medicine Act, no SaMD registry, no Board of Medicine AI guidance; W. Va. Code Ch. 30, Art. 3 is the only state hook

What Makes a State Strict

Colorado: Enacted Law With a Hard Deadline

Colorado is the only state in this set with a signed, comprehensive AI statute that explicitly names high-risk AI systems. SB 24-205, the Colorado Artificial Intelligence Act, was signed May 17, 2024. It covers developers and deployers of high-risk AI — a category the law extends to clinical decision support tools, diagnostic AI, and prior authorization algorithms. Deployer obligations activate February 1, 2026, giving compliance teams a fixed, citable deadline.

The strictness compounds because Colorado operates two parallel tracks. The Division of Insurance Bulletin B-5.38 separately governs algorithmic models used in health plan underwriting and claims decisions. That bulletin is already in force, meaning Colorado insurers and utilization reviewers face binding AI-related constraints right now, not in 2026. No other state in the source set runs two simultaneous, enforceable AI-specific frameworks for healthcare.

The practical burden: Colorado entities must map every AI deployment to SB 24-205's high-risk classification criteria, satisfy deployer disclosure and risk-management obligations, and separately audit insurance-adjacent AI against B-5.38 — all while maintaining federal HIPAA and FDA compliance.
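To make that mapping exercise concrete, here is a minimal sketch in Python. The domain categories, deployment names, and obligation labels are illustrative stand-ins, not the statute's actual definitions; the real high-risk test under SB 24-205 has to be applied to each system individually.

```python
from dataclasses import dataclass

# Hypothetical categories standing in for SB 24-205's high-risk scope,
# which the article says reaches CDS, diagnostic AI, and prior-auth algorithms.
HIGH_RISK_DOMAINS = {"clinical_decision_support", "diagnostics", "prior_authorization"}

@dataclass
class AIDeployment:
    name: str
    domain: str
    insurance_adjacent: bool = False  # triggers the separate B-5.38 track

    def obligations(self) -> list[str]:
        """Return the Colorado-specific compliance tracks this deployment hits."""
        tracks = []
        if self.domain in HIGH_RISK_DOMAINS:
            tracks.append("SB 24-205 deployer duties (disclosure, risk management)")
        if self.insurance_adjacent:
            tracks.append("DOI Bulletin B-5.38 algorithmic-decision audit")
        return tracks

# Example portfolio: one deployment can sit on both tracks at once.
portfolio = [
    AIDeployment("sepsis-predictor", "clinical_decision_support"),
    AIDeployment("claims-triage", "prior_authorization", insurance_adjacent=True),
    AIDeployment("scheduling-bot", "operations"),
]

for d in portfolio:
    print(d.name, "->", d.obligations() or ["federal baseline only"])
```

The key structural point the sketch captures is that the two Colorado frameworks are independent gates: an insurance-adjacent prior-authorization tool accumulates both sets of obligations, while a purely operational tool falls back to the federal baseline.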

California: Layered Statutory Obligations Without a Single Omnibus Law

California achieves strictness through accumulation rather than a single statute. Three frameworks stack on top of each other for any AI system touching health data.

The Confidentiality of Medical Information Act (CMIA, Cal. Civil Code §§ 56–56.37) reaches AI systems that process or infer health information — not just systems that store it. That inference-based scope is broader than HIPAA's definition of protected health information and catches AI tools that derive health signals from non-clinical data.

Separate utilization-management legislation (SB 1120, enacted in 2024) prohibits health plans from using AI as the sole basis for coverage denials and requires licensed clinician review. This directly targets the prior authorization AI use case that is proliferating across the industry.

The California Privacy Rights Act (CPRA, Cal. Civil Code §§ 1798.100 et seq.) adds consumer rights — access, correction, deletion, and opt-out of certain automated decisions — that apply to health-adjacent AI even when HIPAA does not.

Patient-communication disclosure rules (AB 3030) require providers to identify AI-generated communications and offer a human-contact pathway. That is a real-time operational requirement, not just a documentation obligation.

Washington: Private Right of Action Sets It Apart

Washington's My Health My Data Act (RCW 19.373) is the sharpest enforcement tool in this comparison. It became effective March 31, 2024, for large regulated entities and June 30, 2024, for small businesses. The defining feature is a private right of action: individuals can sue directly, without waiting for a state agency to act. Every other state framework in this set relies on regulatory enforcement or professional board discipline.

MHMD covers consumer health data that falls outside HIPAA's scope, which is precisely the category where AI health apps, wearables, and behavioral analytics tools operate. An AI platform that infers a user's health condition from location data or purchase history is in scope. Washington also layers the Consumer Protection Act (RCW 19.86) on top, reaching deceptive AI health claims as an independent cause of action.
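That inference-based scope can be expressed as a simple decision rule. The sketch below is an illustrative model of the two-part question a scoping analysis asks, not the statutory definition of "consumer health data"; the signal categories are hypothetical examples.

```python
# Hypothetical non-clinical signals from which an AI system might derive
# a health inference (the article's location / purchase-history examples).
NON_CLINICAL_SIGNALS = {"location", "purchase_history", "search_queries"}

def mhmd_in_scope(data_category: str, infers_health_condition: bool) -> bool:
    """Illustrative scope test: a flow is covered if it handles health data
    outright, or if a non-clinical signal is used to infer health status."""
    if data_category == "health_data":
        return True
    return data_category in NON_CLINICAL_SIGNALS and infers_health_condition

print(mhmd_in_scope("location", infers_health_condition=True))   # True: inferred health data
print(mhmd_in_scope("location", infers_health_condition=False))  # False: no health inference
```

The second branch is what distinguishes MHMD-style coverage from HIPAA-style coverage: the same location data flips into scope the moment a model uses it to infer a health condition.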


What Makes a State Lenient

Wyoming, South Dakota, and West Virginia: Federal Floor Only

These three states share the same structural characteristic: no enacted AI statute, no agency guidance specific to AI, and no professional board rule that names AI. The only state-level compliance touchpoints are general medical practice acts that predate AI entirely.

Wyoming points providers to W.S. 33-26-101 et seq. (the Wyoming Medical Practice Act) and Wyoming Department of Health facility licensing rules. Neither mentions AI. The Wyoming Board of Medicine is identified as the primary state contact, but no AI policy framework from that board exists in the source material. Compliance is whatever the federal baseline requires.

South Dakota confirms through a legislature review that no enacted chapter addresses AI in clinical or health data contexts. SDCL Title 36 standard-of-care rules apply to clinicians using AI tools, but only because they apply to all clinical tools. There is no additional state layer. A South Dakota hospital deploying an AI diagnostic tool faces FDA SaMD review and HIPAA — and nothing else from the state.

West Virginia explicitly lacks an AI in Medicine Act, a state-level SaMD registry, and published Board of Medicine guidance on AI-assisted clinical decision-making. W. Va. Code Ch. 30, Art. 3 (medical practice) and § 46A-2A-101 et seq. (data breach notification) are the only state statutes with any reach, and both are general-purpose laws written for non-AI contexts.

The Lenient Pattern: What These States Have in Common

The three lenient states share four characteristics that distinguish them from Colorado, California, and Washington:

  1. No enacted AI-specific statute. There is no law with an AI-specific definition, threshold, or obligation.
  2. No agency guidance. State health departments and medical boards have not issued advisories, bulletins, or formal positions on AI in clinical settings.
  3. No disclosure mandate. Providers face no state-level requirement to tell patients when AI influenced a clinical decision or coverage determination.
  4. No private enforcement mechanism. Individuals cannot bring a state-law claim based on AI misuse in healthcare. Enforcement, if it comes at all, runs through professional licensing boards applying general conduct standards.

For vendors and health systems, this means the compliance cost of entering Wyoming, South Dakota, or West Virginia is essentially the cost of federal compliance — FDA clearance if the tool is a medical device, HIPAA security and privacy controls, and FTC Act compliance for consumer-facing claims. No incremental state filing, registration, audit, or disclosure obligation applies.
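The market-entry calculus above reduces to a lookup: which above-federal layers, if any, does each target state add? The sketch below summarizes the frameworks discussed in this guide as plain labels; it is an illustrative data structure, not a compliance database.

```python
# Illustrative summary of the state-specific layers described in this guide.
# An empty list means compliance reduces to the federal floor
# (FDA SaMD review, HIPAA, FTC Act).
STATE_AI_LAYERS = {
    "CO": ["SB 24-205 deployer duties", "DOI Bulletin B-5.38"],
    "CA": ["CMIA", "UM guardrails (no AI-only denials)", "CPRA",
           "AI patient-communication disclosure"],
    "WA": ["My Health My Data Act (private right of action)",
           "Consumer Protection Act (RCW 19.86)"],
    "WY": [],
    "SD": [],
    "WV": [],
}

def incremental_obligations(states: list[str]) -> dict[str, list[str]]:
    """Map each target state to its above-federal compliance layers."""
    return {s: STATE_AI_LAYERS.get(s, []) for s in states}

print(incremental_obligations(["WY", "CO"]))
```

An expansion team comparing a Wyoming launch against a Colorado launch gets an empty list for one and two distinct tracks for the other, which is the whole strict-vs-lenient spread in miniature.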

Frequently Asked Questions

Why doesn't Wyoming regulate AI in healthcare?

Wyoming has opted for a minimal regulatory approach, relying solely on federal standards without enacting specific state laws or guidelines for AI in healthcare.

What federal laws apply to AI in healthcare in states with no specific regulations?

In states like Wyoming, South Dakota, and West Virginia, compliance primarily falls under federal laws such as HIPAA, which governs patient privacy and data security.

Are there any active legislative proposals regarding AI in healthcare in lenient states?

As of this guide's verification date, no active legislative proposals specifically targeting AI in healthcare have been confirmed in Wyoming, South Dakota, or West Virginia.

How does the regulation of AI in healthcare in Colorado compare to neighboring states?

Colorado's regulations, particularly SB 24-205, are among the strictest in the nation, while neighboring states like Wyoming and South Dakota impose no specific AI rules, relying instead on federal baselines.
