A Third of Americans Now Use AI for Health Advice. Here’s When It Can Help and When to Call a Clinician
New U.S. polling shows many people already ask AI about symptoms and test results. Here’s where it can help, where it can mislead, and when to get human care.
A lot of Americans are already asking AI chatbots health questions. New polling from KFF found that about one-third of U.S. adults say they have used AI for health information or advice in the past year. A separate Pew Research Center analysis found that people who use AI chatbots for health information are more likely to describe them as convenient than accurate.
That is the most important starting point for readers: popularity is not proof. Polls can tell us how often people use these tools and why they like them. They do not show that chatbot advice is medically correct, safe, or helpful in real life.
For most people, the best rule is simple: use AI to prepare for care, not replace care.
Why people are turning to AI for health questions
The appeal is easy to understand. AI is fast, available late at night, and usually answers in plain language. KFF found that among people who used AI for health information, the most common reason was wanting immediate advice. Some also said they wanted to look up information before seeing a clinician or felt more comfortable asking questions privately.
That convenience shows up in Pew’s findings too. About half of people who get health information from AI chatbots said it is highly convenient, while far fewer rated it as highly accurate.
That gap matters. If a tool feels easy and reassuring, it can be tempting to trust it more than it deserves.
What AI is reasonably helpful for right now
Used carefully, a consumer chatbot can be useful for low-risk organizing and translation tasks. That is different from diagnosing you.
- Translating jargon: It can turn a technical report or after-visit summary into simpler language.
- Summarizing instructions: It may help condense a long set of discharge instructions into a checklist you can reread.
- Making question lists: If you have a new diagnosis, AI can help you build a list of reasonable questions for your doctor, nurse, or pharmacist.
- Preparing for appointments: It can help you organize a symptom timeline, medication list, or concerns you do not want to forget.
- Reviewing notes: Federal health IT guidance encourages patients to review their visit notes, treatment plans, and test results and prepare questions about what they do not understand. AI may help with that first pass.
That limited role matches the more cautious expert view. A recent JAMA Medical News report noted that many patient-facing AI tools are being built to improve understanding of complex medical information. That is a much narrower and safer goal than acting like a stand-alone diagnostician.
Where AI can go wrong
Health advice gets riskier when a chatbot moves from explaining information to guessing what is wrong, suggesting treatment, or telling you whether something is an emergency.
Here is why.
It usually does not know your full medical context. A safe answer may depend on your age, pregnancy status, allergies, kidney function, medications, recent test results, past diagnoses, and what symptoms are missing from the story. Even when you type in a lot of detail, the chatbot is still not examining you, checking vital signs, or seeing the full chart the way a clinician would.
It can sound confident when it is wrong. The CDC’s guidance on generative AI warns that outputs should be reviewed for accuracy and completeness and checked for hallucinations or misleading content. In public health, the CDC treats generative AI as a drafting and synthesis aid that still requires human review, not as a final authority. Consumers should apply at least that much caution.
It may offer false reassurance. This is one of the biggest risks. A chatbot that tells someone their chest pain is probably heartburn, their shortness of breath is probably anxiety, or their weakness can wait until tomorrow may delay lifesaving care.
It can miss medication safety issues. Advice about prescription drugs, over-the-counter medicines, supplements, and dosing can be dangerous without a full medication list and medical history. Medication questions should generally go to a clinician or pharmacist.
It is not a reliable emergency triage tool. Consumer chatbots are not the same as a nurse line, poison center, urgent care, or emergency department.
Even companies offering health-focused chatbot features say their tools are not a substitute for professional care and should not be used to diagnose medical conditions. Associated Press reporting also quoted experts urging people not to rely on large language models for major medical decisions.
Privacy risks people may overlook
This may be the most underappreciated part of the story.
KFF found that most adults are worried about the privacy of medical information shared with AI tools. Even so, about four in ten people who used AI for physical or mental health information said they had uploaded personal medical information into a chatbot or AI tool.
That can include lab reports, medication lists, family history, photos of rashes, mental health concerns, fertility information, wearable data, or full medical records.
Sharing information with your own clinic through a patient portal is not the same as pasting it into a consumer chatbot. A clinic portal is part of your healthcare relationship. A consumer tool may be operated by a separate company with its own privacy policy and data-sharing terms.
Federal guidance from the Office of the National Coordinator for Health Information Technology (now part of ASTP, the Assistant Secretary for Technology Policy) says people should check whether a health app or tool explains how information is protected, whether data are encrypted, where data are stored, whether the developer can access or exchange the information, and whether the app may allow another person or company to use or sell that information.
Before you upload anything, ask yourself a practical question: would I be comfortable if this information were stored, reused, or shared more broadly than I expected?
If the answer is no, keep the details out of the chatbot and discuss them through your clinician’s office instead.
When to message a clinician, book a visit, or skip AI entirely
AI is most useful before or after care, not in place of it. A simple escalation checklist can help.
Message your clinician or care team if:
- You want help understanding a test result, diagnosis, or treatment plan.
- You are not sure you followed medication or discharge instructions correctly.
- You have a non-urgent side effect or a persistent symptom that is not improving.
- You want to confirm whether something an AI tool told you makes sense for your situation.
Book a visit if:
- You have a new symptom that keeps coming back.
- Your symptoms are affecting sleep, eating, work, school, or daily activities.
- You think you may need an exam, testing, or a medication change.
- You are making an important decision about treatment, surgery, pregnancy, mental health care, or a chronic condition.
Seek emergency care right away instead of using a chatbot if you have:
- Chest pain
- Trouble breathing
- Stroke warning signs such as face drooping, arm weakness, or slurred speech
- Severe allergic symptoms such as throat swelling, trouble swallowing, or wheezing
- Heavy bleeding, unconsciousness, or another obviously life-threatening emergency
- Suicidal thoughts, a mental health crisis, or immediate danger to yourself or someone else
For suicidal thoughts or emotional distress, federal mental health guidance says to call or text 988. In life-threatening situations, call 911 or go to the nearest emergency room.
The bottom line
Consumer AI chatbots can be helpful for organizing information, translating medical language, and preparing better questions. That is real value, especially for people trying to make sense of a confusing health system.
But they are not a diagnosis engine, not a medication safety expert, and not an emergency service.
New polling shows many Americans are already using these tools because they are fast and easy. The safer takeaway is not to avoid them entirely. It is to use them for the tasks they do reasonably well and hand off the high-stakes decisions to humans.
If you remember one rule, make it this one: let AI help you prepare for care, but do not let it replace care.
Sources
- KFF Tracking Poll on AI for Health Information and Advice
- Pew Research Center analysis of AI health information convenience versus accuracy
- CDC Considerations for Generative AI in Public Health
- ASTP guide to using apps with your health records
- JAMA Medical News report on patient-facing AI tools
- AP explainer on asking AI chatbots for health advice
- CDC consumer guidance on emergency warning signs
- American Stroke Association stroke warning signs
- MedlinePlus consumer health information on when to seek emergency care
- NIMH crisis resources, including the 988 Suicide & Crisis Lifeline
- AMA physician AI adoption survey release
This article is for general informational purposes only and is not medical advice. Research findings can be early, limited, or subject to change as new evidence emerges. For personal guidance, diagnosis, or treatment, consult a licensed clinician. For current outbreak or public health guidance, follow your local health department, the CDC, or another relevant public health authority.
