AI Healthcare Regulations in Michigan: A Comprehensive Guide
Understand Michigan's current regulatory landscape for AI in healthcare, including federal oversight, data privacy, and ethical considerations for providers.
AI-drafted, human-reviewed
Quick Answer: Michigan's Approach to AI in Healthcare
Michigan has no standalone legislation for clinical AI and instead defers to federal regulatory frameworks. Practice is governed by a combination of:
- Federal law: FDA oversight of Software as a Medical Device (SaMD), HIPAA privacy and security rules, and Federal Trade Commission guidance on algorithmic accountability.
- State law: Michigan's Public Health Code (MCL 333.1101 et seq.) and the Michigan Medical Records Access Act (MCL 333.26261 et seq.), neither of which mentions AI directly but both apply to how providers collect, use, and disclose patient information.
- Professional standards: Guidance from the American Medical Association (AMA) and the Michigan State Medical Society (MSMS) on clinical oversight and informed consent.
Michigan providers using AI diagnostic tools, clinical decision support algorithms, or automated prior authorization systems are subject to multiple regulatory layers. These frameworks were not designed for AI, creating compliance ambiguity and legal exposure.
Current Regulatory Framework: Michigan and Federal Overlap
Federal Mandates That Apply Directly in Michigan
FDA and Software as a Medical Device
The FDA regulates AI tools that meet the definition of a medical device under 21 U.S.C. §321(h). The agency's Digital Health Center of Excellence has published guidance distinguishing between clinical decision support (CDS) software, which may be exempt from device regulation if it provides information for healthcare professionals to review independently, and Software as a Medical Device (SaMD), which requires premarket review due to its direct diagnostic or therapeutic function. A cross-sectional analysis of FDA-authorized oncology AI and machine learning devices found that the clinical evidence supporting many authorized tools is limited in scope and generalizability (Litt H et al., Journal of Cancer Policy, PubMed 42025919). Michigan providers should not assume FDA authorization equals clinical validation for their specific patient population.
The FDA's framework for AI/ML-based SaMD (published by the FDA's Center for Devices and Radiological Health) describes a "predetermined change control plan" that allows developers to update algorithms post-market within pre-approved parameters. When procuring an AI tool, ask the vendor whether their product is FDA-authorized, under what classification, and whether any post-market algorithm changes have been filed.
HIPAA
The Health Insurance Portability and Accountability Act applies to any AI system that processes protected health information (PHI). This means:
- Training an AI model on patient records without a valid Business Associate Agreement (BAA) with the developer constitutes a HIPAA violation (45 CFR §164.308, administered by the HHS Office for Civil Rights). A BAA ensures the vendor, as a business associate, is contractually obligated to protect PHI with the same rigor as the covered entity, including specific provisions for data use, security, and breach reporting.
- De-identification of data used to train or validate AI tools must meet the standards under 45 CFR §164.514(b), either the Expert Determination or Safe Harbor method.
- If an AI vendor experiences a breach involving PHI, your organization bears notification obligations under the HIPAA Breach Notification Rule (45 CFR §§164.400-414).
Run every AI procurement through your privacy officer before signing a contract. The BAA is not optional.
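Before patient data ever reaches an AI vendor, many privacy teams run an automated screen for Safe Harbor identifier fields. The sketch below is illustrative only: the field names are hypothetical, the category map is partial (Safe Harbor lists 18 identifier types), and a real de-identification workflow under 45 CFR §164.514(b) still requires privacy-officer or expert review.

```python
# Minimal sketch: flag record fields that fall under HIPAA Safe Harbor
# identifier categories (45 CFR 164.514(b)(2)) before records enter an
# AI training or validation set. Field names are hypothetical, and the
# map is deliberately partial -- this is illustrative, not a compliance tool.
SAFE_HARBOR_FIELDS = {
    "patient_name": "names",
    "street_address": "geographic subdivisions smaller than state",
    "birth_date": "dates (except year) related to an individual",
    "phone": "telephone numbers",
    "email": "email addresses",
    "ssn": "social security numbers",
    "mrn": "medical record numbers",
    "ip_address": "IP addresses",
}

def flag_identifiers(record: dict) -> list:
    """Return the Safe Harbor categories present (non-null) in a record."""
    return sorted(
        SAFE_HARBOR_FIELDS[field]
        for field in record
        if field in SAFE_HARBOR_FIELDS and record[field] is not None
    )

def scrub(record: dict) -> dict:
    """Drop every mapped identifier field; keep the clinical fields."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
```

A screen like this catches obvious identifier fields, but Safe Harbor also requires that the covered entity have no actual knowledge the remaining data could identify a patient, which no field filter can verify on its own.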
Michigan-Specific Statutes
Michigan Public Health Code (MCL 333.1101 et seq.)
The Public Health Code is the foundational statute governing healthcare practice in Michigan. It establishes licensure requirements for physicians, nurses, and other clinicians (MCL 333.16101 et seq.), defines the standard of care, and grants the Michigan Department of Licensing and Regulatory Affairs (LARA) authority to discipline licensees for unprofessional conduct. While the code does not reference AI, its provisions apply directly to how clinicians use AI outputs.
A licensed physician who acts on a flawed AI recommendation without applying independent clinical judgment can face discipline under MCL 333.16221. This section broadly covers unprofessional conduct, including negligence, incompetence, and lack of good moral character. These could all apply if a clinician delegates core diagnostic or treatment responsibilities to an AI without proper oversight. The AI tool is not the licensee; you are.
Michigan Medical Records Access Act (MCL 333.26261 et seq.)
This statute governs patient rights to access their medical records. If AI-generated outputs, such as risk scores, diagnostic suggestions, or treatment recommendations, are incorporated into the medical record, patients have the right to access them. The act does not explicitly address whether providers must explain the underlying methodology of an AI score or recommendation. However, as AI tools become more integrated into care, the ability to provide transparent explanations of AI-generated outputs becomes crucial for patient understanding and trust, directly impacting informed consent discussions.
Michigan Department of Health and Human Services (MDHHS)
MDHHS oversees Medicaid managed care contracts, public health programs, and healthcare facility licensing in Michigan. The department has not issued AI-specific guidance as of this writing. However, MDHHS-administered Medicaid managed care procurements are subject to the same scrutiny applied nationally. A 2026 study examining Medicaid managed care procurement across 32 states found a systematic overemphasis on technology and equity performance claims that often lacked rigorous supporting evidence (Basu S et al., Inquiry, PubMed 42012014). This suggests that while vendors frequently highlight AI's potential to improve outcomes and reduce disparities, the empirical data to substantiate these claims in procurement documents is often insufficient. Michigan providers and managed care organizations should treat vendor equity claims about AI tools with the same skepticism applied to any unvalidated clinical claim. Consult MDHHS for current Medicaid managed care contract requirements.
Key Legal and Ethical Considerations for AI in Michigan Healthcare
Data Privacy and Security
HIPAA sets the federal floor, but Michigan adds its own layer through the Michigan Medical Records Access Act (MCL 333.26261 et seq.) and the Identity Theft Protection Act (MCL 445.61 et seq.), which governs breach notification for personal information more broadly. When an AI system processes PHI, your exposure spans both frameworks.
Risks to manage:
- Vendor data use: Some AI vendors retain de-identified patient data to retrain their models. Confirm contractually whether your patient data is used for this purpose and whether your BAA addresses it. Data minimization principles should be applied to AI training datasets where feasible.
- Cloud processing: AI inference often happens in cloud environments. Ensure your vendor's infrastructure meets HIPAA Security Rule requirements (45 CFR §164.312) for transmission and storage.
- Audit logs: The HIPAA Security Rule requires audit controls (45 CFR §164.312(b)). AI systems that access EHR data should generate logs sufficient to reconstruct who queried what and when.
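The audit-controls requirement above is concrete enough to sketch. The record structure below is an assumption, not a standard: the field names are hypothetical and no particular EHR vendor's schema is implied. The point is that each entry should capture who, whose data, what, and when, in an append-only store.

```python
import json
import time
import uuid

def audit_entry(user_id: str, patient_id: str, action: str, ai_tool: str) -> dict:
    """Build one audit record for an AI system's access to EHR data.

    Fields are chosen so a reviewer can reconstruct who queried what and
    when, per the audit-controls requirement (45 CFR 164.312(b)).
    Field names here are hypothetical.
    """
    return {
        "event_id": str(uuid.uuid4()),                              # unique per event
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user_id": user_id,        # who initiated the query
        "patient_id": patient_id,  # whose PHI was touched
        "action": action,          # what happened, e.g. "risk_score_query"
        "ai_tool": ai_tool,        # which system performed it
    }

def append_log(path: str, entry: dict) -> None:
    """JSON-lines, append-only: prior entries are never rewritten."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Whatever the format, the test is practical: given only the log, could a compliance officer reconstruct every AI query against a given patient's record?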
Algorithmic Bias and Health Equity
AI systems trained on historically biased datasets reproduce and sometimes amplify existing disparities. Research on health outcomes among American Indian and Alaska Native Medicare beneficiaries documents significant disparities in Alzheimer's disease and related dementia diagnoses and related health conditions (Yang M et al., Alzheimer's & Dementia, PubMed 42002809). If an AI diagnostic tool was not validated on populations similar to your patient panel, its outputs may be systematically less accurate for those patients. This could lead to misdiagnosis, delayed treatment, or inappropriate care for specific demographic groups.
Michigan providers serving diverse or underserved populations should ask vendors for disaggregated performance data by race, ethnicity, age, and sex before deployment. If the vendor cannot provide it, that is a red flag. The AMA's Augmented Intelligence in Medicine policy calls for transparency in AI training data and performance metrics across demographic subgroups. The Michigan State Medical Society has endorsed principles of equitable AI deployment, though specific binding guidance should be confirmed directly with MSMS.
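Reviewing disaggregated performance data is straightforward once a vendor supplies labeled validation results. This is a minimal sketch, assuming binary outcomes and a simple (subgroup, truth, prediction) row format; what counts as an acceptable gap between subgroups is a local clinical and governance decision, not something the code decides.

```python
from collections import defaultdict

def subgroup_metrics(rows):
    """Compute sensitivity and specificity per demographic subgroup.

    `rows` is an iterable of (subgroup, y_true, y_pred) with 0/1 labels.
    A large sensitivity or specificity gap between subgroups is the
    kind of red flag the guidance above describes.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in rows:
        c = counts[group]
        if y_true and y_pred:
            c["tp"] += 1          # condition present, flagged
        elif y_true:
            c["fn"] += 1          # condition present, missed
        elif y_pred:
            c["fp"] += 1          # condition absent, flagged anyway
        else:
            c["tn"] += 1          # condition absent, correctly cleared
    out = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        out[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
        }
    return out
```

If a vendor cannot produce the inputs for an analysis like this for populations resembling your patient panel, that absence is itself the finding.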
Liability for AI Errors
Michigan has not enacted legislation allocating liability specifically for AI-related medical errors. Under existing tort law, the analysis defaults to standard medical malpractice principles governed by MCL 600.2912a. This requires proof that the provider failed to meet the standard of care as practiced by a reasonably prudent clinician in the same or similar circumstances.
The liability map:
| Party | Potential Exposure | Legal Basis |
|---|---|---|
| Treating clinician | Malpractice for failing to exercise independent judgment | MCL 600.2912a (standard of care) |
| Hospital/health system | Vicarious liability; negligent credentialing of AI tools | MCL 600.2912a (vicarious liability for employee actions); common law (negligent credentialing/supervision) |
| AI developer | Products liability; negligent design or failure to warn | Michigan products liability law (defective design, manufacturing defect, failure to warn) |
| EHR vendor integrating AI | Depends on contract terms and degree of customization | Contract (breach of warranty); tort (negligence in integration/customization) |
No Michigan appellate court has issued a published opinion specifically addressing AI malpractice liability as of this writing. Consult your malpractice carrier and legal counsel before deploying any AI tool in a clinical decision-making role.
Patient Consent and Transparency
Michigan's informed consent statute (MCL 333.17013 for physicians) requires that patients receive information material to a reasonable patient's decision to accept or refuse treatment. Whether the use of an AI diagnostic tool constitutes "material information" that must be disclosed to patients, particularly regarding its limitations or potential biases, remains unsettled in Michigan law. However, evolving ethical guidelines suggest a move towards greater transparency.
The AMA's policy on augmented intelligence recommends that patients be informed when AI plays a significant role in their care. In practice, that means documenting in the consent process that AI-assisted tools may be used, what function they serve, and that a licensed clinician reviews and takes responsibility for all recommendations. That documentation can also help demonstrate independent clinical judgment if a claim later arises.
Professional Responsibility and Clinical Oversight
AI does not hold a medical license in Michigan. The clinician using it does, and under the Public Health Code that clinician retains full responsibility for every diagnosis and treatment decision, regardless of what the algorithm recommended.