AI Healthcare Regulations in New Hampshire: A Comprehensive Guide
Understand the current AI healthcare regulations in New Hampshire, including federal FDA oversight, state medical practice laws, and data privacy requirements for providers.
Quick Answer: The Regulatory Environment for AI in Healthcare in New Hampshire
No single NH law governs AI in healthcare. Providers operate under a patchwork of federal device law, federal privacy law, and state professional licensing statutes that apply to AI by implication.
Primary regulatory bodies include:
- FDA for AI/ML software qualifying as a medical device or Software as a Medical Device (SaMD).
- HHS Office for Civil Rights for HIPAA compliance when AI systems process protected health information (PHI).
- NH Board of Medicine (and parallel boards for nursing, pharmacy, etc.) for professional conduct and standard-of-care obligations.
- NH Department of Justice for data breach notification under state law.
The absence of a dedicated NH AI statute means providers bear the interpretive burden. When regulations do not specifically address AI, underlying professional conduct and patient safety obligations still apply.
Federal Oversight: The FDA's Role in AI/ML Medical Devices and Software
The FDA is the most direct regulatory authority for many NH providers deploying clinical AI.
Software as a Medical Device (SaMD)
The FDA adopted the International Medical Device Regulators Forum (IMDRF) definition of SaMD: software intended for one or more medical purposes that performs those purposes without being part of a hardware medical device. An AI algorithm analyzing a chest X-ray for nodules or a machine learning model predicting sepsis onset typically qualifies as SaMD and falls under FDA jurisdiction.
The FDA's 2022 final guidance, "Clinical Decision Support Software," distinguishes regulated device software from exempt clinical decision support. Broadly, software that supports clinical decision-making is more likely to be regulated as a device when the clinician cannot independently review the basis for its recommendations. Providers should consult this guidance before assuming a vendor's tool is exempt.
Premarket Review Pathways
| Pathway | When It Applies | Key Requirement |
|---|---|---|
| 510(k) | Device is substantially equivalent to a legally marketed predicate | Substantial equivalence demonstration |
| De Novo | Novel, low-to-moderate risk device with no predicate | Risk-based classification request |
| PMA (Premarket Approval) | High-risk device (Class III) | Valid scientific evidence of safety and effectiveness |
Most AI/ML diagnostic tools currently reach the market through 510(k) or De Novo pathways. A cross-sectional analysis of FDA-authorized oncology AI/ML devices found that clinical evidence supporting authorization varies, with some relying on retrospective data and limited external validation (Litt H et al., Journal of Cancer Policy, 2026 [PMID 42025919]). NH providers should view FDA authorization as a baseline, not a comprehensive endorsement, when evaluating clinical AI tools.
The Total Product Lifecycle (TPLC) Framework
AI/ML algorithms are not static. Models trained on one dataset can drift as patient populations, imaging equipment, or clinical workflows change. The FDA's proposed framework for modifications to AI/ML-based SaMD, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device," addresses this. The TPLC approach requires manufacturers to submit a Predetermined Change Control Plan (PCCP) describing anticipated algorithm modifications and their validation methods.
NH providers should ask AI vendors whether a product has an FDA-authorized PCCP and should request documentation showing how algorithm updates are validated before deployment.
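Drift of the kind the TPLC framework targets can also be monitored locally, independent of the vendor. Below is a minimal sketch using the Population Stability Index (PSI), a common drift statistic; the synthetic data, random seed, and 0.2 threshold are illustrative assumptions, not FDA requirements:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline feature distribution and a current one.
    PSI > 0.2 is a common (informal) threshold for investigating drift."""
    # Bin edges come from the baseline distribution
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) and division by zero
    eps = 1e-6
    base_p = base_counts / max(base_counts.sum(), 1) + eps
    curr_p = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((curr_p - base_p) * np.log(curr_p / base_p)))

# Example: compare patient-age distributions between the model's
# validation data and recent intake (synthetic data for illustration)
rng = np.random.default_rng(0)
baseline_ages = rng.normal(55, 12, 5000)
current_ages = rng.normal(65, 12, 5000)   # population has shifted older
psi = population_stability_index(baseline_ages, current_ages)
print(f"PSI = {psi:.3f}")  # a large value here signals drift
```

In practice the baseline would come from the validation dataset the vendor documented and the current sample from recent production inputs; a rising PSI is a cue to contact the vendor and review performance, not a regulatory determination.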
Orthopedic and Specialty AI Devices: A Caution
A 2025 study found that few FDA-approved AI/ML orthopedic devices have EU MDR equivalents or peer-reviewed validation studies supporting their clinical claims (Bracken A et al., Clinical Orthopaedics and Related Research, 2025 [PMID 41915013]). This suggests a broader pattern. NH providers in specialty practices should independently verify the clinical validation behind any AI tool rather than relying solely on FDA clearance as proof of real-world performance.
New Hampshire's Regulatory Landscape for AI in Healthcare
As of mid-2025, New Hampshire has no statute or administrative rule specifically addressing artificial intelligence in healthcare. There is no NH equivalent to Colorado's AI Act or California's proposed health AI transparency bills. Instead, general professional practice laws apply to AI use by implication.
Medical Practice Act: NH RSA 329
NH RSA Chapter 329 governs the practice of medicine in New Hampshire. It defines medical practice, establishes licensure, and grants the NH Board of Medicine authority to discipline physicians for unprofessional conduct, incompetence, or conduct endangering patient health. Using an AI diagnostic tool that produces an erroneous result without appropriate clinical oversight could violate the standard of care under RSA 329, even though the statute never mentions AI.
The Board of Medicine's administrative rules address professional conduct standards. Providers should contact the Board directly to confirm any guidance specific to AI or clinical decision support software.
Nursing Practice Act: NH RSA 326-B
NH RSA Chapter 326-B governs nursing practice and grants the NH Board of Nursing authority over scope of practice and professional conduct. Nurses who rely on AI-generated alerts or recommendations in clinical workflows carry the same professional accountability as for any other clinical judgment. In that sense, the "human in the loop" principle is already embedded in existing scope-of-practice law rather than being a new AI-specific requirement.
Patient Rights and Consumer Protection
NH RSA Chapter 151 governs healthcare facilities and includes patient rights provisions. While not AI-specific, patient rights to informed consent and access to information about their care apply when AI tools influence diagnosis or treatment recommendations. Providers should assess whether their informed consent processes adequately disclose AI use, particularly for tools influencing high-stakes clinical decisions.
The NH Consumer Protection Act (NH RSA 358-A) could apply if AI-related misrepresentations are made to patients, though no NH enforcement actions specific to healthcare AI have been publicly documented.
Ethical Considerations and Data Privacy in NH AI Healthcare Deployments
HIPAA Compliance
Any AI system that ingests, processes, or outputs PHI is subject to HIPAA, including the Privacy Rule (45 CFR Parts 160 and 164, Subparts A and E) and the Security Rule (45 CFR Part 164, Subparts A and C). These rules apply to the provider deploying the system, and a business associate agreement (BAA) is required with any AI vendor that creates, receives, maintains, or transmits PHI on the provider's behalf.
Key HIPAA obligations for AI deployments:
- Execute a BAA with every AI vendor handling PHI.
- Conduct a Security Risk Analysis before deploying any new AI system that touches PHI.
- Ensure the AI system's data retention, access logging, and audit trail capabilities meet Security Rule requirements.
- Verify de-identification methods for AI training data meet Safe Harbor or Expert Determination standards under 45 CFR §164.514.
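The Safe Harbor check in the last bullet can be partially automated for structured data. The sketch below is illustrative only: the field names are hypothetical, the list is not the complete set of 18 identifier categories, and free-text fields still require separate scrubbing:

```python
# Sketch: verify that records bound for AI model training omit fields
# mapped to HIPAA Safe Harbor identifier categories (45 CFR §164.514(b)(2)).
# Field names are hypothetical and the list is abbreviated; map both to
# your own schema and to the full regulatory list.

PROHIBITED_FIELDS = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "mrn", "health_plan_id", "account_number", "license_number",
    "vehicle_id", "device_serial", "url", "ip_address",
    "biometric_id", "photo", "full_dob",   # only the year is permitted
}

def safe_harbor_violations(record: dict) -> list[str]:
    """Return prohibited fields present (and non-empty) in a record."""
    return sorted(
        field for field in PROHIBITED_FIELDS
        if record.get(field) not in (None, "")
    )

record = {"age_years": 72, "zip3": "032", "full_dob": "1953-02-14"}
print(safe_harbor_violations(record))  # ['full_dob'] → block the export
```

Note that Safe Harbor also imposes rules a field check cannot capture, such as aggregating ages over 89 and truncating ZIP codes for small geographic units, so a structural check like this supplements, rather than replaces, a formal de-identification determination under 45 CFR §164.514.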
NH RSA 359-C: Data Breach Notification
NH RSA Chapter 359-C requires any person doing business in New Hampshire that owns or licenses computerized data including personal information to notify affected NH residents following a security breach. "Personal information" under RSA 359-C:3 includes medical and health insurance information when combined with a name or other identifier.
An AI system breach, whether through vendor compromise, model inversion attack, or unauthorized access to training data, could trigger RSA 359-C notification obligations. Providers must ensure incident response plans explicitly address AI system breaches.
Algorithmic Bias and Informed Consent
Algorithmic bias is a known issue in clinical AI. Models trained on non-representative datasets can produce systematically worse results for certain patient populations. A study on Medicaid managed care procurement found systematic overemphasis of technology and equity performance claims across 32 states, suggesting vendor representations about equity and performance warrant scrutiny (Basu S et al., Inquiry, 2026 [PMID 42012014]).
The American Medical Association and the NH Medical Society emphasize physician responsibility for AI-assisted decisions. Consult the AMA's "Augmented Intelligence in Medicine" policy and the NH Medical Society directly for current guidance. The physician remains accountable for the clinical decision, regardless of algorithm recommendations.
Informed consent for AI use is an evolving area. No NH statute currently mandates disclosure of AI involvement in clinical decision-making, but general informed consent doctrine under NH common law and RSA 329 supports disclosure when AI meaningfully influences a clinical recommendation.
Emerging Trends and Future Regulatory Outlook for AI in Healthcare
Federal Activity
The federal regulatory landscape is evolving. The Biden administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 2023) directed HHS to develop an AI safety program for healthcare. The ONC has incorporated AI considerations into its Health Data, Technology, and Interoperability (HTI-1) final rule, affecting certified health IT developers. NIST's AI Risk Management Framework (AI RMF 1.0) provides a voluntary, but increasingly referenced, structure for managing AI risk in high-stakes domains like healthcare.
NH providers implementing NIST AI RMF practices are better positioned for future mandatory compliance requirements.
Potential NH Legislative Action
New Hampshire has historically taken a light-touch approach to technology regulation. However, national momentum toward AI transparency and accountability legislation may prompt NH legislative activity, particularly if a high-profile AI-related patient safety incident occurs in the state. Providers should monitor the NH General Court (legislature.nh.gov) for relevant bill introductions in the 2025-2026 session.
Medicaid and Equity Considerations
For NH providers participating in Medicaid managed care, the Basu et al. research (PMID 42012014) highlights a concern: technology and equity claims made by managed care organizations in procurement documents may not reflect actual performance. NH providers contracting with managed care organizations should request evidence-based validation of any AI tools embedded in those contracts, particularly tools affecting care management or utilization review.
Resources and Compliance Guidance for NH Healthcare Providers
Key Agencies to Contact
| Agency | Jurisdiction | Contact |
|---|---|---|
| FDA Center for Devices and Radiological Health (CDRH) | SaMD, AI/ML medical devices | fda.gov/medical-devices |
| HHS Office for Civil Rights | HIPAA compliance | hhs.gov/ocr |
| NH Board of Medicine | Physician licensing, professional conduct | medicine.nh.gov |
| NH Board of Nursing | Nursing scope of practice | oplc.nh.gov/nursing |
| NH DHHS | State health programs, Medicaid | dhhs.nh.gov |
| NH Department of Justice | RSA 359-C breach notification | doj.nh.gov |
Due Diligence When Selecting AI Tools
Before deploying any clinical AI tool, verify the following:
- FDA clearance or approval status, and the specific 510(k), De Novo, or PMA number.
- Whether the vendor has a signed BAA template and their breach notification SLA.
- The demographic composition of the training dataset and any published validation studies.
- Whether the algorithm has been validated on a patient population comparable to yours.
- The vendor's process for communicating algorithm updates and revalidation results.
Legal Counsel and Staff Training
Engage legal counsel with combined healthcare and technology law experience before signing AI vendor contracts. Standard vendor agreements often include indemnification carve-outs for AI errors that shift liability to the provider.
Staff training should cover the AI tool's function, limitations, how to override or escalate when AI output conflicts with clinical judgment, and how to document AI-assisted decisions in the medical record. The NH Medical Society (nhms.org) offers state-specific professional guidance and continuing education resources.
Ongoing Monitoring
AI system performance requires scheduled review throughout deployment, not just validation at go-live. Track outcome metrics stratified by patient subgroup to detect emerging bias. Establish a clear internal process for reporting AI-related near-misses or adverse events, and integrate AI systems into existing patient safety reporting infrastructure.
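The subgroup-stratified monitoring described above can be sketched in a few lines. The subgroup labels, the choice of metric (sensitivity of an AI alert), and the disparity threshold are all illustrative assumptions to be set by your own quality committee:

```python
from collections import defaultdict

# Sketch: track an AI alert's sensitivity (true-positive rate) per patient
# subgroup to surface emerging bias. Each event is a
# (subgroup, ai_flagged, condition_present) tuple; data is synthetic.

def sensitivity_by_subgroup(events):
    """Fraction of condition-positive patients the AI flagged, per subgroup."""
    tp, fn = defaultdict(int), defaultdict(int)
    for group, flagged, present in events:
        if present:
            (tp if flagged else fn)[group] += 1
    return {
        g: tp[g] / (tp[g] + fn[g])
        for g in set(tp) | set(fn)
        if tp[g] + fn[g] > 0
    }

def flag_disparities(rates, max_gap=0.10):
    """Subgroups whose sensitivity trails the best subgroup by > max_gap."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if best - r > max_gap)

events = (
    [("A", True, True)] * 90 + [("A", False, True)] * 10 +
    [("B", True, True)] * 70 + [("B", False, True)] * 30
)
rates = sensitivity_by_subgroup(events)
print(round(rates["A"], 2), round(rates["B"], 2))  # 0.9 0.7
print(flag_disparities(rates))  # ['B'] → investigate subgroup B
```

A flagged subgroup is a prompt for clinical review and a conversation with the vendor, not by itself proof of algorithmic bias; small subgroups in particular need confidence intervals before any conclusion is drawn.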