How the FDA Regulates Artificial Intelligence in Healthcare — And What That Means for Patients in 2026
AI tools are increasingly used in imaging, diagnostics, and clinical decision support. Here’s how the FDA regulates artificial intelligence as medical devices, what Software as a Medical Device (SaMD) means, and what patients should know about safety, updates, and limits in 2026.
Why AI regulation matters to patients in 2026
If you’ve had a scan reviewed by software that highlights possible abnormalities, used a clinic portal that flags high-risk lab results, or been told an algorithm helped prioritize your case, you’ve likely encountered artificial intelligence (AI) in healthcare.
In the United States, many of these tools are regulated by the U.S. Food and Drug Administration (FDA). That oversight matters. It helps ensure that certain AI systems used for diagnosing disease or guiding treatment meet standards for safety and effectiveness.
But not all health apps are regulated the same way. And FDA clearance does not mean a tool is flawless or better than a clinician. It means the product met regulatory standards for its specific intended use.
Here’s what that actually means for patients and families in 2026.
What counts as a medical device — and what is “Software as a Medical Device”?
The FDA regulates medical devices, which include not just physical tools like pacemakers or imaging machines, but also certain types of software.
When software performs a medical function on its own — such as analyzing an X-ray to detect a fracture or evaluating retinal images for diabetic eye disease — it may be classified as Software as a Medical Device (SaMD).
According to the FDA’s overview of AI and machine learning–enabled devices, software generally falls under medical device regulation if it is intended to:
- Diagnose a disease or condition
- Detect or predict a health problem
- Guide treatment decisions
- Analyze medical images or physiologic data in a way that affects care
Common examples include:
- Imaging analysis tools that flag possible lung nodules or strokes
- Software that detects diabetic retinopathy from retinal photos
- Algorithms that analyze heart rhythms for atrial fibrillation
- Clinical decision support tools that recommend next steps based on patient data
These products are different from most step counters, meditation apps, or general wellness trackers. Many consumer health apps that promote fitness or lifestyle goals are not regulated as medical devices, because they are not intended to diagnose or treat disease.
That distinction matters. A regulated device must meet FDA standards for safety and effectiveness. A general wellness app may never have been reviewed by the FDA at all.
How the FDA reviews and clears AI-enabled devices
The FDA does not have a single “AI approval” label. Instead, AI tools that qualify as medical devices go through established device pathways. At a high level, there are three main routes:
- 510(k) clearance: The manufacturer must show the device is substantially equivalent to an already legally marketed device (known as a "predicate").
- De Novo classification: Used for novel low- to moderate-risk devices that have no clear predicate for comparison.
- Premarket Approval (PMA): Required for the highest-risk devices; involves the most extensive evidence of safety and effectiveness, often including clinical trials.
Most AI-enabled tools currently on the market have gone through the 510(k) or De Novo pathways rather than PMA.
The FDA maintains a publicly available, regularly updated list of AI/ML-enabled medical devices. As of 2026, the majority are in radiology, with growing numbers in cardiology, neurology, and other specialties. The list illustrates how quickly this category has expanded — especially in imaging.
It’s important to be precise about language. Many devices are FDA-cleared, not “FDA-approved.” Approval is a specific term usually tied to the PMA pathway. Clearance means the device met regulatory requirements for its intended use, not that it is perfect or superior to human clinicians.
How adaptive AI updates are handled: The FDA’s AI/ML Action Plan
Traditional medical devices do not change much after clearance. AI systems can. That creates a regulatory challenge.
In its 2021 AI/ML-Based Software as a Medical Device Action Plan, the FDA outlined a "total product lifecycle" approach to oversight. Instead of reviewing a tool only once, the agency aims to monitor it over time.
A key concept is the Predetermined Change Control Plan. In simple terms, this means a company can propose in advance:
- What kinds of updates it plans to make to the algorithm
- How those changes will be tested
- How safety and performance will be monitored
If the FDA agrees to that plan, certain updates can be made within those boundaries without starting the review process from scratch each time.
This approach is designed to allow improvement — such as retraining a model with new data — while maintaining guardrails around safety and transparency.
What happens after clearance: Real-world monitoring and performance drift
FDA oversight does not end once a device is cleared or approved.
Manufacturers are required to monitor safety and report certain adverse events. The FDA also expects real-world performance monitoring, especially for AI systems that may change over time.
Experts writing in the New England Journal of Medicine and JAMA have raised important concerns about “performance drift.” That’s when an algorithm that worked well in initial studies performs differently in real-world settings — for example, in hospitals serving different populations.
Challenges include:
- Bias: If training data underrepresent certain racial, ethnic, age, or language groups, accuracy may be lower for those groups.
- Data shift: Changes in clinical practice or patient populations can affect results.
- Workflow effects: A tool may perform differently depending on how it is used in a busy clinical environment.
Clearance means a device met standards based on available evidence. It does not guarantee equal performance for every patient or setting. Ongoing monitoring is critical.
What the FDA does not regulate — and why that matters
Not all AI in healthcare falls under FDA medical device rules.
Many consumer-facing apps are not regulated as medical devices, as long as they do not claim to diagnose or treat disease. These include apps that:
- Track fitness or sleep
- Provide general wellness advice
- Offer lifestyle suggestions
Some health chatbots and symptom checkers operate in gray areas. If they provide general education, they may not require FDA oversight. If they make specific diagnostic or treatment recommendations, they may cross into regulated territory.
For patients, this means you should not assume every health-related app has been reviewed by the FDA. The level of oversight depends on what the tool claims to do.
What patients should ask when AI is used in their care
If an AI tool is involved in your diagnosis or treatment, consider asking:
- Is this tool FDA-cleared or approved?
- What is its intended use?
- How accurate is it for people like me?
- Who reviews the results?
- What happens if the AI and the clinician disagree?
It’s also important to remember that clinicians remain responsible for medical decisions. AI systems are tools. They may assist with pattern recognition or risk prediction, but they do not replace professional judgment, informed consent, or shared decision-making.
Bottom line: Oversight exists — but informed patients still matter
As of March 2026, AI-enabled medical devices are increasingly common in imaging, diagnostics, and clinical decision support across the United States. The FDA regulates many of these tools under its medical device framework, using established pathways and a lifecycle approach to updates and monitoring.
That oversight provides important safeguards. But it does not eliminate uncertainty, bias, or the need for careful clinical use.
For patients and families, the key takeaways are straightforward:
- If an AI tool diagnoses disease or guides treatment, it likely falls under FDA regulation.
- FDA clearance means the tool met standards for its intended use — not that it is flawless.
- Not all health apps are regulated the same way.
- Your clinician remains accountable for your care.
As AI becomes more embedded in healthcare, understanding how it is regulated can help you ask better questions, interpret results more confidently, and stay an active partner in your care.
Sources
- https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
- https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- https://www.fda.gov/media/145022/download
- https://www.nejm.org/doi/full/10.1056/NEJMp2200745
- https://jamanetwork.com/journals/jama/fullarticle/2789379
This article is for general informational purposes only and is not medical advice. Research findings can be early, limited, or subject to change as new evidence emerges. For personal guidance, diagnosis, or treatment, consult a licensed clinician. For current outbreak or public health guidance, follow your local health department, the CDC, or another relevant public health authority.
