Top 5 common mistakes AI in healthcare applicants make
The five errors that most often cost AI in healthcare applicants time, money, or rejection, and how to avoid each.
Mistake 1: Assuming "No AI-Specific Law" Means No Compliance Obligation
What applicants do wrong: They search for a dedicated AI healthcare statute, find none (correctly — Alabama, Alaska, Arizona, Arkansas, and California all lack a single omnibus AI healthcare law as of mid-2025), and conclude they can deploy freely until the legislature acts.
Why it costs them: Every one of these states has existing law that applies to AI tools by extension. Alabama's Medical Practice Act (Ala. Code Title 34) and Data Breach Notification Act (Ala. Code § 8-38-1 et seq.) reach AI clinical tools. Arkansas's Medical Records Act (Ark. Code Ann. § 20-9-301 et seq.) treats AI-generated record entries identically to human-authored ones. California's CMIA (California Civil Code §§ 56–56.37) covers any system that processes or infers health data — no AI-specific trigger required.
Applicants who skip compliance work on this assumption typically discover the gap during a HIPAA audit or a state board complaint. Remediation after deployment runs $50,000–$300,000 depending on the number of affected records and whether a breach notification is triggered. Timeline hit: 6–18 months of corrective work.
The fix:
- Map your AI tool against every existing state law in your deployment jurisdiction — medical practice acts, medical records statutes, telehealth rules, and consumer protection statutes — before assuming a clean slate.
- Run a HIPAA Security Risk Analysis that explicitly names the AI tool and its data flows. This is required regardless of state law.
- In states with pending task forces (Alaska's HCR 3, for example), set a calendar alert for the next legislative session. Withdrawn bills frequently return — Arkansas's HB 1816 and HB 1297 were withdrawn in 2025 but their legislative intent is on record and either can be reintroduced.
Mistake 2: Missing the FDA SaMD Determination Before Launch
What applicants do wrong: They classify their AI tool as "clinical decision support" and skip FDA premarket review, assuming software is outside device regulation.
Why it costs them: The FDA's Software as a Medical Device (SaMD) framework applies in every state. If your tool meets the device definition — meaning it's intended to diagnose, treat, cure, mitigate, or prevent disease — it requires premarket clearance or approval regardless of whether Alabama, Arizona, or any other state has passed an AI law. Launching without clearance exposes the vendor to FDA warning letters, injunctions, and civil monetary penalties. A 510(k) submission costs $15,000–$100,000 in preparation fees and takes 3–12 months for FDA review. Doing this after a warning letter adds legal defense costs of $50,000–$250,000 on top.
The fix:
- Run the FDA's four-factor SaMD risk framework against your tool during product design, not after, and document the analysis in writing (a rough screening sketch follows this list).
- If your tool provides a specific diagnosis or treatment recommendation without mandatory clinician override, assume device status until FDA guidance says otherwise.
- Use the FDA's Pre-Submission (Q-Sub) program to get written agency feedback before filing — it costs nothing and can save months of back-and-forth.
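For teams that want to operationalize that screen, here is a minimal, illustrative Python sketch of the four non-device CDS criteria (FD&C Act § 520(o)(1)(E), the test the FDA's clinical decision support guidance applies). The field names and pass/fail logic are simplifications for discussion, not a substitute for the actual regulatory analysis.

```python
from dataclasses import dataclass

@dataclass
class CdsProfile:
    """Intended-use answers for a candidate tool. Field names are
    illustrative, not an official FDA schema."""
    analyzes_images_or_signals: bool      # criterion 1: image/IVD/signal analysis
    displays_medical_info: bool           # criterion 2: displays/analyzes medical info
    recommends_to_clinician: bool         # criterion 3: recommendations go to an HCP
    basis_independently_reviewable: bool  # criterion 4: HCP can review the basis

def likely_non_device_cds(p: CdsProfile) -> bool:
    """Rough screen: software is non-device CDS only if it fails
    criterion 1 and meets criteria 2-4. Failing any prong means
    you should analyze the tool as a potential SaMD."""
    return (
        not p.analyzes_images_or_signals
        and p.displays_medical_info
        and p.recommends_to_clinician
        and p.basis_independently_reviewable
    )

# Example: an opaque model that issues a specific treatment
# recommendation the clinician cannot independently verify fails
# criterion 4, so assume device status per the bullets above.
tool = CdsProfile(
    analyzes_images_or_signals=False,
    displays_medical_info=True,
    recommends_to_clinician=True,
    basis_independently_reviewable=False,
)
assert not likely_non_device_cds(tool)
```

A "yes" from this screen is a starting point for the written analysis, not a conclusion; the Q-Sub feedback described above is the way to confirm it.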
Mistake 3: Executing a Generic BAA Without AI-Specific Terms
What applicants do wrong: They execute a standard HIPAA Business Associate Agreement (BAA) template with their AI vendor and consider the privacy obligation satisfied.
Why it costs them: A generic BAA doesn't address how the AI model uses PHI for training, whether de-identified data is re-identifiable through model outputs, or what happens to data if the vendor's model is retrained on your patient population. California's CMIA imposes obligations beyond HIPAA — any AI system that "infers" health data is in scope, and a boilerplate BAA won't cover inference-based processing. CMIA violations carry civil penalties; the DMHC has independent enforcement authority over health plans. A single enforcement action can run $100,000–$1,000,000+ depending on scope.
The fix:
| BAA Clause | What to Add for AI |
|---|---|
| Permitted uses of PHI | Explicitly prohibit use of PHI for model training without separate written authorization |
| Subcontractors | Name any cloud infrastructure or model-hosting subprocessors |
| Breach notification | Require notification if model outputs expose PHI indirectly |
| Data return/destruction | Specify what happens to PHI embedded in model weights |
| Audit rights | Include right to audit AI system logs, not just data stores |
Have legal counsel review the final BAA — this is one place where that advice is genuinely warranted, not a platitude.
Mistake 4: Ignoring Utilization Management and Disclosure Rules in California (and Similar Guardrails Elsewhere)
What applicants do wrong: Health plans and insurers deploy AI for prior authorization or claims review without adding licensed clinician review steps, and patient-facing AI communications go out without disclosure or a human-contact pathway.
Why it costs them: California has enacted utilization management guardrails that prohibit health plans from using AI as the sole basis for coverage denials and require licensed clinician review. Separately, patient-communication disclosure requirements mandate that AI-generated communications identify themselves as such and offer a human alternative. The Department of Managed Health Care (DMHC) has enforcement authority. Non-compliant health plans face plan-level sanctions. Timeline to remediate a workflow built without these steps: 4–9 months, plus potential back-payment obligations on improperly denied claims.
The fix:
- Audit every utilization management workflow. If AI produces a denial recommendation, a licensed clinician must review it before the denial issues; build that step into the system, not as an afterthought (a minimal gating sketch follows this list).
- Review all patient-facing AI-generated communications (appointment reminders, care gap alerts, chatbot responses). Each needs a disclosure statement and a clear pathway to a human.
- Confirm current statutory requirements directly with CDPH and DMHC. California's rules in this area are actively evolving, and specific penalty amounts should be verified against the current CMIA text and DMHC regulations.
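One way to make the clinician-review requirement structural rather than procedural is to hard-code the gate, as in the Python sketch below. The types and field names (claim IDs, NPI numbers, concur flags) are hypothetical illustrations, assuming a workflow where AI output feeds a separate determination step.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiUmRecommendation:
    claim_id: str
    recommendation: str   # "approve" or "deny" (illustrative values)
    rationale: str

@dataclass
class ClinicianReview:
    reviewer_npi: str     # licensed reviewer's NPI
    concurs: bool
    notes: str

def issue_determination(rec: AiUmRecommendation,
                        review: Optional[ClinicianReview]) -> str:
    """AI output may support an approval on its own, but a denial
    cannot issue without a documented licensed-clinician review."""
    if rec.recommendation == "approve":
        return f"{rec.claim_id}: approved"
    if review is None:
        # The gate: no code path denies without a review on file.
        raise RuntimeError(
            f"{rec.claim_id}: denial blocked; licensed clinician "
            "review is required before any AI-recommended denial")
    if not review.concurs:
        return f"{rec.claim_id}: escalated for full clinical review"
    return f"{rec.claim_id}: denied (reviewed by NPI {review.reviewer_npi})"
```

The design point is that the no-review denial path raises instead of returning, so an audit of the code itself demonstrates the control.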
Mistake 5: Failing to Document Algorithmic Bias Analysis Before Deployment
What applicants do wrong: Applicants deploy AI clinical tools without any documented review of whether the algorithm performs differently across race, ethnicity, sex, or disability status — and without a plan for ongoing monitoring.
Why it costs them: HHS Office for Civil Rights guidance under Section 1557 of the ACA explicitly addresses algorithmic discrimination. A hospital in Birmingham, Phoenix, or Little Rock deploying a diagnostic AI tool faces Section 1557 scrutiny if the tool produces disparate outcomes for protected classes. OCR investigations are triggered by patient complaints and can result in corrective action plans, resolution agreements, and civil monetary penalties. Resolution agreements in health equity cases have historically ranged from $50,000 to several million dollars depending on scope and willfulness. Beyond federal exposure, California's CPRA grants consumers rights regarding automated decision-making with significant effects — health data exemptions are narrow.
The fix:
- Before deployment, obtain and review the vendor's bias validation data. Ask specifically: what demographic subgroups were in the training data, and what is the performance differential across those groups?
- Document your review in writing. If the vendor can't provide this data, that is itself a material procurement risk.
- Build a post-deployment monitoring schedule: quarterly performance reviews stratified by race, ethnicity, and sex at minimum (see the sketch after this list).
- Assign a named individual responsible for Section 1557 compliance for AI tools. OCR looks for this during investigations.
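Here is a minimal sketch of what a quarterly stratified review could look like, assuming a pandas DataFrame of predictions with boolean `y_true`/`y_pred` columns and one demographic column. The column names, and the choice of sensitivity and flag rate as metrics, are illustrative; OCR does not prescribe a specific methodology.

```python
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Stratify model performance by one demographic column.
    Expects boolean columns: y_true (condition present) and
    y_pred (model flagged the patient)."""
    def metrics(g: pd.DataFrame) -> pd.Series:
        tp = (g["y_true"] & g["y_pred"]).sum()
        fn = (g["y_true"] & ~g["y_pred"]).sum()
        fp = (~g["y_true"] & g["y_pred"]).sum()
        return pd.Series({
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "flag_rate": (tp + fp) / len(g),
        })
    return df.groupby(group_col).apply(metrics)

# Quarterly run (hypothetical column name): a large sensitivity
# gap between groups is exactly the disparity to investigate,
# document, and escalate to the named Section 1557 owner.
# report = subgroup_performance(predictions_df, "race_ethnicity")
```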
Quick Reference: Cost and Timeline Summary
| Mistake | Typical Cost if Caught Late | Timeline to Fix |
|---|---|---|
| Assuming no law applies | $50,000–$300,000 (remediation + notification) | 6–18 months |
| Skipping FDA SaMD review | $65,000–$350,000 (510(k) + legal defense) | 3–12 months |
| Generic BAA without AI terms | $100,000–$1,000,000+ (CMIA/HIPAA enforcement) | 2–6 months to renegotiate |
| Missing UM/disclosure rules | Plan sanctions + claim back-payments | 4–9 months |
| No bias documentation | $50,000–$2,000,000+ (OCR resolution) | 3–12 months |
All five mistakes share the same root cause: treating AI deployment as a technology project rather than a simultaneous multi-jurisdiction compliance project. Build the compliance workstream in parallel with product development, not after go-live.
Frequently Asked Questions
Why doesn't the state regulate AI in healthcare specifically?
None of these states has enacted a single omnibus AI healthcare statute. That does not mean AI is unregulated: existing medical practice acts, medical records statutes, and data privacy laws apply to AI tools by extension, and California has added targeted AI rules for utilization management and patient communications.
What laws apply to AI in healthcare in these states?
Existing law applies by extension: state medical practice acts, medical records statutes, telehealth rules, and consumer protection statutes, plus federal HIPAA requirements and the FDA's SaMD framework, all reach AI tools.
Are there any active legislative proposals regarding AI in healthcare?
Yes. Alaska's HCR 3 would establish a task force, and Arkansas's HB 1816 and HB 1297 were withdrawn in 2025 but can be reintroduced. Monitor each legislative session, since withdrawn bills frequently return.
What do businesses do given the absence of specific state law on AI in healthcare?
They map the AI tool against existing state and federal law (Mistake 1 above), run a HIPAA Security Risk Analysis that names the tool, negotiate AI-specific BAA terms, and engage counsel where the stakes warrant it.
How does the regulation of AI in healthcare in these states compare to neighboring states?
Regulation varies significantly: some states have enacted targeted AI healthcare provisions, California's utilization management and disclosure rules being the clearest example, while others rely on existing medical practice and privacy statutes. Map each deployment jurisdiction individually rather than assuming parity.