AI in Healthcare Is Expanding Fast. What FDA-Authorized Tools Can — and Can’t — Do for Patients


FDA-authorized medical AI is already common in scans and workflow tools. Here’s what authorization means, what it does not, and what patients should ask.

Artificial intelligence is already part of U.S. health care, even if most patients never see it directly. The biggest practical question is not whether AI exists in medicine anymore. It is whether a specific tool is being used safely, fairly, and with clear human oversight.

That matters because many AI-enabled medical devices have already been cleared or authorized by the FDA. But an FDA decision is not the same thing as proof that a tool improves care for every patient in every hospital, or that it works equally well across all populations in everyday practice.

AI is already in routine care, often behind the scenes

For most patients, medical AI does not look like a humanoid robot or a chatbot making final decisions. It is more likely to be software working quietly in the background.

The FDA’s public list of AI-enabled medical devices, current as of March 4, 2026, includes well over 1,000 entries. Radiology dominates that list, with many tools designed to help detect, sort, reconstruct, or measure findings on imaging studies. Other authorized devices appear in cardiovascular care, neurology, gastroenterology, pathology, anesthesiology, ophthalmology, and other specialties.

In plain terms, patients are most likely to encounter AI in places such as:

  • Imaging support, such as software that flags a possible bleed, lung nodule, fracture, or suspicious mammogram finding for review.
  • Cardiology tools, including software that analyzes heart rhythm data or helps interpret certain scans and measurements.
  • Procedure support, such as systems that assist with colon polyp detection or help guide a specialist during a procedure.
  • Workflow support, including programs that prioritize urgent cases, summarize information, or help clinicians manage large volumes of studies and messages.

These systems are usually assistants, not fully independent decision-makers. A clinician is still supposed to interpret the result in context.

What FDA authorization does mean

FDA clearance or authorization is meaningful. It means a tool met the regulatory standard for its intended use through one of the agency’s pathways, such as 510(k) clearance, De Novo classification, or premarket approval.

That review can include the software’s intended purpose, how it was tested, whether the study design fits the claim being made, and whether the overall evidence is adequate for marketing in the United States.

For patients, that is an important baseline. An FDA-authorized device is not the same as an unreviewed app making medical claims on the internet.

What FDA authorization does not mean

It does not mean a tool is error-free. It does not mean it has been proven to improve outcomes for every patient group. And it does not mean the tool will work equally well in every clinic, with every scanner, or in every local patient population.

A device can perform well in a company’s validation studies and still run into real-world problems later. That can happen if the testing data were limited, if the tool is used in a different setting than the one it was cleared for, if clinicians are not trained well, or if the software produces too many false alarms or misses too many true problems.

This is one reason experts have pushed for more attention to clinical outcomes, safety, equity, and workflow effects, not just technical accuracy.

Not all health AI is regulated the same way

One source of confusion is that people hear the phrase “AI in health care” and assume it all went through the same FDA process. It did not.

There is a major difference between medical-device AI and generative or administrative AI.

Medical-device AI is software intended to perform a device function, such as helping detect disease, classify an image, guide treatment, or support a diagnostic or procedural claim. Those tools may fall under FDA medical-device oversight.

Generative AI or other administrative AI may be used to draft notes, summarize charts, suggest portal-message replies, translate text, or organize paperwork. Some of these functions may fall outside the medical-device framework, depending on what the tool actually does and what claims are being made for it.

The FDA’s clinical decision support guidance, updated in January 2026, also makes clear that some software functions are excluded from the legal definition of a medical device under federal law. So patients should not assume every AI system in a clinic has the same regulatory status.

That distinction matters in everyday care. A tool helping read a scan is not the same as a tool drafting a visit note. They raise different questions about safety, privacy, evidence, and oversight.

Generative AI may help with paperwork, but that is a different claim

A recent JAMA cohort study looked at one of the fastest-growing uses of health AI: ambient scribes, which listen to visits and draft documentation for clinicians. This was a multisite observational study across five U.S. academic medical centers involving 8,581 clinicians.

The study found that AI-scribe adoption was associated with modest reductions in time spent in the electronic health record and on documentation, along with a small increase in weekly visit volume. That may matter for clinician workload and burnout.

But it is a different kind of evidence than proof that an AI tool improves diagnosis or treatment outcomes for patients. A note-drafting tool and a disease-detection device should not be judged by the same standard.

Why software updates matter so much for AI tools

Medical AI raises a challenge that older devices did not: software can change.

If a company adjusts a model, changes its training data, tunes performance, or expands how the software works, that can affect patient care. The FDA has been paying special attention to this issue because an AI device is not always a static product.

That is where predetermined change control plans come in. Under the FDA’s final guidance for AI-enabled device software functions, a manufacturer can propose in advance what kinds of changes it expects to make, how those changes will be developed and validated, and how the impact on safety and effectiveness will be assessed. The FDA then reviews that plan as part of the device’s submission.

For patients, the key point is simple: companies are not supposed to let medical AI freely rewrite itself after launch with no oversight. The plan is an attempt to create a structured, pre-reviewed way to handle certain planned software changes.

Why real-world concerns remain even after authorization

Several concerns keep coming up in the medical literature and in policy discussions.

1. Validation data may be limited

A tool may be trained or tested on data that do not fully represent the people who will later be affected by it. That can matter for age, race and ethnicity, sex, language, disease severity, pregnancy status, disability, rural settings, or the specific equipment used by a health system.

2. Performance can be uneven across populations

If a tool works better in one group than another, that can widen existing health disparities. This is one reason physicians and regulators keep emphasizing subgroup testing, transparency, and post-market monitoring.

3. False positives and false negatives still happen

AI can miss disease. It can also flag harmless findings and send patients into extra testing, extra cost, and extra anxiety. Faster is not always better if the tradeoff is more unnecessary alarms.

4. Clinicians can over-rely on software

JAMA commentaries on AI safety have warned that the risk is not only bad software. It is also bad human-software interaction. If a clinician trusts a tool too much, or if the interface makes it hard to independently review the basis for a recommendation, patient harm can follow.

5. Generative AI creates its own risks

Generative systems can produce convincing but wrong text. In medicine, that means a draft note, summary, or message can sound polished while leaving out important details or inserting errors that still need human review.

6. Privacy and data use are not one-size-fits-all

A regulated imaging tool, a hospital portal assistant, and a third-party note-writing system may handle data differently. Patients should not assume that every AI product in a clinical setting has the same privacy protections, contractual safeguards, or rules for secondary use of data.

7. Coverage and payment can be uneven

Even when a tool is clinically useful, payment does not always line up neatly. Hospitals and physician groups have told federal officials that reimbursement for AI-related services and infrastructure is still incomplete and inconsistent. For patients, the practical takeaway is to ask about possible extra costs before agreeing to follow-up testing or monitoring prompted by an AI-assisted finding.

Why workflow and human factors matter as much as the algorithm

The Agency for Healthcare Research and Quality is backing work focused on human factors and AI safety for a reason. A technically strong tool can still fail if it is dropped into a busy clinic without good training, clear escalation rules, or a way for clinicians to check its reasoning.

In other words, a health system does not become safer just because it bought an AI product. Real safety depends on how the tool is implemented, monitored, explained, and corrected when problems appear.

Recent reporting from Reuters on AI-assisted surgical devices also underscored a broader point: some problems only become visible after tools are in wider use. Adverse-event reports do not prove causation in every case, but they are a reminder that post-market surveillance matters.

Questions patients can ask when AI affects their care

If an AI-assisted result changes a test, diagnosis, or treatment plan, it is reasonable to ask a few direct questions:

  • Was an AI tool used here? If so, what role did it play?
  • Did a clinician personally review the result?
  • Was this tool tested in patients like me? That could mean similar age, sex, race or ethnicity, language, pregnancy status, health conditions, or care setting.
  • What are the main limits of this tool? Can it miss problems or overcall them?
  • What happens if the AI output and the clinician’s judgment do not match?
  • Is this tool helping with diagnosis, or is it just helping with paperwork and workflow?
  • Will this lead to extra testing or added cost? Is that covered by insurance?
  • How is my data being used and stored?
  • Should I get a second opinion if the decision is high stakes?

These are not confrontational questions. They are part of informed care.

What this means for readers

AI may already be involved in your care even if you never hear the word during an appointment. In many cases, that can be helpful. It may speed image review, reduce clerical work, or help flag a problem that needs a closer look.

But patients should not confuse FDA authorization with a blanket guarantee of accuracy, fairness, or better outcomes. Medical AI is still a tool. Like any tool, it depends on how well it was designed, how honestly it was tested, how carefully it is updated, and whether a human clinician is still using judgment.

The best patient question is often the simplest one: How did this AI tool affect the decision being made about me?


This article is for general informational purposes only and is not medical advice. Research findings can be early, limited, or subject to change as new evidence emerges. For personal guidance, diagnosis, or treatment, consult a licensed clinician. For current outbreak or public health guidance, follow your local health department, the CDC, or another relevant public health authority.