When Patients Bring AI to the Exam Room
Understanding, partnering with, and guiding patient use of AI health tools.
How can you transform patient AI use from a clinical challenge into an opportunity for deeper engagement and better outcomes?
If you take one thing from this topic: start asking about AI use without judgment. Add this question to your routine: "Have you looked this up online or talked to any AI tools about it? I'd love to hear what you found."
This simple question—asked with genuine curiosity, not suspicion—opens dialogue, builds trust, and gives you insight into what your patient is thinking. Patients who feel judged for using AI will hide it. Patients who feel heard will share it.
Introduction: The New Reality
Your patient walks in with a printout from ChatGPT suggesting she might have lupus. Another asks why you didn't order the MRI that "the AI" recommended. A parent wants to know if the treatment plan you've outlined matches what Google's Gemini told them about their child's ear infection. These scenarios are no longer hypothetical—they're becoming routine.
According to a 2024 KFF poll, approximately 17% of U.S. adults now use AI chatbots at least monthly for health information and advice, with that figure climbing to 25% among adults under 30. A cross-sectional study published in the Journal of Medical Internet Research found that 21.5% of respondents had used ChatGPT specifically for online health information. The Elsevier Clinician of the Future 2025 survey reveals a striking mismatch: while patients increasingly turn to AI tools, only 16% of clinicians report using AI in direct decision-making, and just 30% feel they've received adequate training.
This topic examines the emerging landscape of patient AI use, how it manifests in clinical encounters, and—most importantly—how clinicians can transform what might feel like a challenge into an opportunity for deeper engagement and better outcomes.
Part 1: Understanding How Patients Use AI
Who Is Using AI for Health Information?
The demographic profile of health AI users challenges some assumptions. Research from ResearchMatch reveals that people who use ChatGPT for health information tend to be younger (average age 32.8 versus 39.1 for non-users) but, perhaps surprisingly, have lower educational attainment (49.9% hold a bachelor's degree or higher, versus 67% of non-users). These users also show greater utilization of transient healthcare settings like emergency departments and urgent care.
This pattern suggests that AI health tools may be filling gaps where traditional primary care access falls short. Patients who lack a consistent primary care relationship, face longer wait times, or experience cost barriers may turn to AI as a more accessible—if imperfect—alternative.
What Are Patients Asking AI?
Patient AI queries cluster into several distinct categories:
- Symptom interpretation and self-diagnosis: "I have a headache behind my eyes and a stiff neck—what could this be?"
- Lab result explanation: Uploading MyChart screenshots and asking what their numbers mean
- Medication questions: Side effects, interactions, whether a prescribed medication is "really necessary"
- Treatment validation: Checking whether their doctor's recommendation aligns with what the AI suggests
- Understanding diagnoses: "What is rheumatoid arthritis and what should I expect?"
- Preparing for appointments: Generating questions to ask, understanding what tests might be ordered
A study examining AI chatbot use for perioperative health information identified four key themes:
- Patients prefer AI chatbots over traditional search engines for their conversational interface
- They appreciate the improved accessibility and perceived quality of information
- They sometimes prefer AI interaction over human interaction for sensitive questions
- They recognize that effective prompting skills matter for getting useful responses
Why Patients Choose AI Over Traditional Sources
The appeal is understandable when you consider the patient's perspective. As one patient told the New York Times: "ChatGPT has all day for me—it never rushes me out of the chat." This reflects a fundamental tension in modern healthcare: the average primary care visit lasts about 18 minutes, with much of that time consumed by documentation and administrative tasks. AI offers unlimited time, immediate availability, and—crucially—no judgment.
Research on patient motivations identifies several drivers:
- Accessibility: Available 24/7, no appointment needed, no insurance required
- Perceived comprehensiveness: AI provides longer, more detailed explanations than time-constrained clinicians often can
- Reduced stigma: For sensitive topics (mental health, sexual health, embarrassing symptoms), AI feels "safer"
- Cost: Free or low-cost compared to a physician visit
- Control: Patients can ask follow-up questions without worrying about taking too much time
Research from NYU found that patients had difficulty distinguishing between AI-generated and physician-generated responses to health queries—correctly identifying the source only about 65% of the time. This suggests that AI responses are reaching a level of sophistication that makes them credible to patients, for better or worse.
Why Patients Trust AI—And What It Teaches Us
Here's an uncomfortable truth that the research reveals: AI chatbots often outperform physicians in perceived empathy. A landmark JAMA Internal Medicine study found that healthcare professionals preferred ChatGPT responses over physician responses 79% of the time. Even more striking: empathetic responses were 9.8 times more common from ChatGPT (45.1%) than from physicians (4.6%).
Why? The differences are instructive:
- Length: ChatGPT responses averaged 168-245 words; physician responses averaged 17-62 words
- Validation: AI consistently acknowledges patient concerns before providing information
- Patience: AI never rushes, never seems annoyed by "obvious" questions, never makes patients feel burdensome
- Availability: AI is there at 2 AM when anxiety peaks, with no wait time or cost
This isn't about AI replacing physicians—it's about AI revealing what patients crave: time, validation, and explanation. When a patient turns to ChatGPT, they're often seeking what a rushed 18-minute visit couldn't provide.
The research suggests a path forward: physicians using AI to draft responses produced messages that were 2-3 times longer and rated more empathetic, while reporting decreased cognitive burden. AI can help us be more human, not less.
Ask yourself: When a patient brings AI research to your visit, are they challenging your expertise—or showing you what they needed that they didn't get elsewhere?
The Limitations Patients Don't See
While patients perceive AI as comprehensive and knowledgeable, several critical limitations remain invisible to non-expert users. First, AI tools like ChatGPT lack real-time medical knowledge—they're trained on data with a cutoff date and don't incorporate the latest research, drug alerts, or emerging treatment protocols. A patient asking about a recently approved medication may receive outdated or simply fabricated information.
Second, AI cannot perform a physical examination. The entire diagnostic process in medicine often hinges on findings that can only be obtained through direct observation and touch: the texture of a skin lesion, the quality of a heart murmur, the tenderness pattern of an abdomen. AI is limited to what patients describe—and patients often don't know what's clinically significant to mention.
Third, AI lacks the ability to integrate the subtle contextual cues that experienced clinicians process automatically: the patient's affect, their living situation, their reliability as a historian, the nonverbal signals that suggest whether symptoms are being minimized or amplified. A study by symptom checker researchers found that even with perfect information, AI-based tools listed the correct diagnosis in the top three options only 51% of the time, compared to 84% for physicians given the same vignettes.
Finally, AI can hallucinate—generating confident-sounding information that is entirely fabricated. Studies have documented AI chatbots inventing medical research, citing papers that don't exist, and recommending treatments with no evidence base. Patients have no way to distinguish hallucinated information from accurate information without independent verification.
Part 2: How AI Health Use Shows Up in Clinical Encounters
The Prepared Patient
Some patients arrive having done their AI homework in constructive ways. They've generated lists of questions, researched their symptoms, and come ready for an informed conversation. An internist at Beth Israel Deaconess described a patient who uploaded his lab results to ChatGPT and arrived with organized questions. "I welcome patients showing me how they use AI," he noted. "Their research creates an opportunity for discussion."
These encounters can be efficient and satisfying for both parties. The patient feels heard and prepared; the clinician can address specific concerns rather than starting from zero.
The Worried Well (Now More Worried)
AI chatbots, when prompted with symptoms, tend to generate comprehensive differential diagnoses—including rare and serious conditions. A patient with a headache might receive a list that includes tension headache, migraine, and sinusitis, but also meningitis, brain tumor, and aneurysm. Without clinical context to weight these possibilities, patients may fixate on the most frightening options.
An emergency physician who tested ChatGPT on real patient presentations found it performed reasonably well for classic presentations with complete information, but noted: "Most actual patient cases are not classic. The vast majority of any medical encounter is figuring out the correct patient narrative." The AI missed an ectopic pregnancy in a patient presenting with abdominal pain—a potentially fatal diagnosis that required the nuanced history-taking that revealed she didn't know she was pregnant.
The Second Opinion Seeker
Increasingly, patients use AI to validate—or challenge—their physician's recommendations. This can manifest as straightforward questions ("ChatGPT said I should ask about X—what do you think?") or more confrontational challenges ("The AI says I don't need this medication—why are you prescribing it?").
A Customertimes survey found that 40% of Americans are willing to follow medical advice generated by AI. While most still expect AI to serve as a supportive tool rather than a physician replacement, the willingness to act on AI recommendations without professional input raises safety concerns.
The Vulnerable Patient
Perhaps most concerning: research published in JMIR found that patients failed to distinguish potentially harmful AI advice from safe advice. While physicians lowered their ratings for responses they identified as harmful, patients' assessments of empathy and usefulness remained unchanged. Most harmful responses involved either overtreatment/overdiagnosis or undertreatment/underdiagnosis—errors that require clinical expertise to recognize.
The study noted that "profound knowledge of the specific field is necessary to identify harmful advice"—such as knowing that gallstones greater than 3 cm are associated with increased cancer risk. Patients simply don't have the framework to evaluate clinical accuracy.
Recognizing AI-Influenced Encounters
Clinicians should listen for signals that a patient has consulted AI:
- Technical or clinical terminology that seems memorized rather than understood
- Accounts of what they've "read" about their condition that sound like a conversation rather than a standard medical website
- Questions about specific diagnostic possibilities that seem out of proportion to their symptoms
- Requests for tests or treatments "because the AI said..."
- Printed or screenshotted AI conversations
Consider simply asking: "Have you looked this up online or talked to any AI tools about it?" Normalizing the question removes stigma and opens dialogue.
Consider how you would approach these common scenarios:
The Worried Parent: A mother brings her 4-year-old with a rash. Before you can complete your assessment, she shows you a ChatGPT conversation suggesting possible Kawasaki disease, Stevens-Johnson syndrome, and scarlet fever. The child has a mild viral exanthem. Your task: validate her concern, explain how you assess probability, and demonstrate what clinical findings guide your thinking.
The Medication Skeptic: A patient with newly diagnosed hypertension returns for follow-up, having not filled the lisinopril prescription. He explains that ChatGPT told him about potential side effects and suggested "natural alternatives" first. Your task: acknowledge his research, explain risk-benefit in context, and have an honest conversation about evidence for alternatives.
The Informed Partner: A patient with a new cancer diagnosis arrives with her spouse, who has spent three days researching treatment options with AI. He's prepared a detailed spreadsheet comparing protocols. Some information is accurate; some reflects AI hallucination. Your task: honor their preparation, correct misinformation gently, and help them understand how decisions are made.
The Self-Diagnosed Teen: A 16-year-old has researched her symptoms on multiple AI platforms and is convinced she has ADHD. She's compiled supporting evidence and wants medication. Your task: take her concerns seriously while conducting appropriate evaluation and exploring alternatives.
Part 3: Partnering With AI-Using Patients
Reframe the Dynamic
The instinct to dismiss AI-derived information or feel threatened by it is understandable but counterproductive. Patients who research their health—through any medium—are demonstrating engagement. That's valuable.
Consider the analogy of a patient who arrives having read everything they could find about their diagnosis. We generally view this positively, even when their sources are imperfect. AI use is similar: it represents patients taking initiative in their healthcare. Our job is to help them do it effectively.
The "Curious Colleague" Approach
Rather than positioning yourself as the authority correcting AI mistakes, try approaching the conversation as a collaborative evaluation of information. Some phrases that work:
- "That's interesting—let's look at what the AI told you and see how it applies to your specific situation."
- "AI can be helpful for general information. What it can't do is examine you or know your full history. Let me show you what I'm seeing that changes the picture."
- "You've done some good research. Let me help you figure out which parts apply to you."
- "The AI gave you a good starting list. Here's how I'm thinking about narrowing it down based on your exam."
Addressing Discrepancies
When your recommendation differs from what the AI suggested, transparency is key:
- Acknowledge the AI's reasoning: "Based on symptoms alone, the AI's suggestion makes sense."
- Explain your additional data: "But I can see that your exam shows X, your history includes Y, and that changes the probability significantly."
- Make your reasoning visible: "Here's why I'm less worried about the scary diagnosis and more focused on..."
- Invite questions: "Does that help explain why I'm thinking differently? What other questions do you have?"
When the AI Was Right (Or Partially Right)
Sometimes patients bring AI-generated insights that are genuinely useful—a medication interaction you might have missed, a question about a relevant clinical trial, or a reasonable concern that warrants investigation. Acknowledge this:
- "Good catch—that's worth checking."
- "The AI actually raised a reasonable point. Let me look into that."
- "I hadn't considered that angle—I'm glad you brought it up."
This builds trust and models the appropriate use of AI as a supplement to—not replacement for—clinical judgment.
Communication Scripts for Common Situations
"The AI gave you accurate general information about [condition]. What it couldn't know is your specific situation—your other health conditions, your medications, your lifestyle. Let me fill in how this applies to you specifically, because the treatment approach really depends on those details."
"That recommendation was probably accurate a few years ago. The guidelines have actually changed—we now know that [updated approach] is more effective/safer. AI tools don't always have the most current information, which is one reason it's important to verify things with your healthcare team."
"I can see why that seemed convincing, but I need to correct something important. AI tools sometimes generate information that sounds authoritative but isn't accurate—it's called 'hallucination.' In this case, [correct information]. This is exactly why these tools work best as a starting point for questions rather than as a final answer."
"I respect that you've done a lot of research, and I want to make sure you understand my reasoning so you can make an informed choice. Here's why I'm recommending [approach] instead of what the AI suggested: [specific reasons]. Ultimately, this is your health and your decision. I want to make sure you have complete information, including what concerns me about the alternative approach and what we would watch for if you choose to go that route."
Managing Anxiety Amplified by AI
For patients whose AI research has increased their anxiety, the clinical approach matters:
- Validate the concern: "It's understandable that seeing 'brain tumor' on a list would be frightening."
- Explain probability: "AI lists everything possible, but doesn't tell you what's likely. For your symptoms, in your age group, with your exam, here's what's much more probable..."
- Provide concrete reassurance: "Here's specifically what I looked for on your exam that tells me we don't need to worry about X."
- Create a safety net: "If you develop any of these specific symptoms, that would change my thinking and you should call right away."
Part 4: Teaching Patients to Use AI More Effectively
Rather than discouraging AI use—which is unlikely to work and may damage trust—consider equipping patients with skills to use these tools more safely. This represents a form of digital health literacy that's becoming as important as understanding how to read a nutrition label.
Core Principles for Patients
1. AI Is a Starting Point, Not a Destination
Help patients understand that AI can be useful for generating questions, understanding terminology, or exploring possibilities—but it cannot examine them, know their full history, or make clinical decisions. The appropriate endpoint for serious health concerns is always a qualified professional.
2. Prompting Matters
Research shows that the quality of AI responses depends heavily on how questions are framed. Teaching patients basic prompting skills can improve the information they receive (see the sketch after this list):
- Include relevant personal context (age, relevant medical history, current medications)
- Ask the AI to "act as a clinician" or "explain as if to a patient"
- Ask one focused question at a time rather than multiple questions
- Request that the AI explain its reasoning
- Ask for counterarguments or alternative explanations
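How much framing matters is easy to demonstrate. The sketch below is a minimal, purely illustrative Python example that sends the same headache question twice, once vaguely and once using the tips above. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable, and the model name is just a placeholder; the same comparison can be run in any chat interface with no code at all.

```python
# Illustrative only: the same health question asked vaguely vs. with structured framing.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

vague = "Why does my head hurt?"

structured = (
    "Act as a clinician explaining things to a patient in plain language. "
    "I am a 34-year-old with no chronic conditions and no regular medications. "
    "For three days I have had a dull headache behind my eyes, worse in the afternoon, "
    "with no fever, vision changes, or neck stiffness. "
    "What are the most likely explanations, what symptoms would make this more concerning, "
    "and what questions should I bring to my doctor? Explain your reasoning."
)

for label, prompt in [("Vague", vague), ("Structured", structured)]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(reply.choices[0].message.content)
```

Showing the two answers side by side is a quick way to demonstrate, to patients or colleagues, that context and a clear ask do most of the work.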
3. Privacy Isn't Guaranteed
Consumer AI tools are not HIPAA-compliant. Data entered into ChatGPT, Claude, or similar tools goes to companies that may use it for training or other purposes. Patients should avoid including identifying information (name, date of birth, Social Security number) in health queries.
4. AI Can Hallucinate
Large language models can generate confident-sounding but entirely fabricated information. Patients should be especially skeptical of specific claims (drug interactions, dosages, treatment protocols) and verify anything important with a professional or authoritative medical source.
5. Emergency Symptoms Require Emergency Care
AI is not appropriate for evaluating urgent symptoms. Patients should understand that chest pain, severe shortness of breath, neurological changes, or other potentially serious symptoms require immediate professional evaluation—not an AI conversation.
Practical Tips for Patient AI Use
Consider providing patients with these concrete strategies:
- Understanding diagnoses: "Explain [condition] in simple terms—what causes it, how it's treated, and what to expect."
- Preparing for appointments: "What questions should I ask my doctor about [condition/medication/test]?"
- Understanding lab results: "What does a [specific test] measure and what do high/low values generally indicate?"
- Medication information: "What are common side effects of [medication] and when should I be concerned?"
- Learning terminology: "What does [medical term] mean in plain language?"
Patients should also be cautioned against:
- Self-diagnosing based on symptom descriptions
- Making treatment decisions without professional input
- Relying on AI for medication dosing or interactions
- Using AI during potential emergencies
- Trusting AI recommendations over clinical advice
The "Verify Big Claims" Rule
As one physician advises: "If it tells you something that you think is really big, verify it with an expert human." This simple heuristic captures much of what patients need to know. AI can be useful for routine questions and background information. For anything significant—a worrisome symptom, a treatment decision, a major diagnosis—professional verification is essential.
Part 5: Creating AI Tools for Your Patients
Rather than hoping patients use general-purpose AI wisely, you can create custom AI tools tailored to your practice and patients. This gives you control over the information patients receive while meeting them where they are. No coding required—just the skills you've already developed in this course.
Why Create Patient-Facing AI Tools?
- Control the information: Your custom GPT can only reference your approved materials, protocols, and guidance
- Reduce after-hours burden: Patients get immediate answers to routine questions without calling the on-call line
- Consistent messaging: Every patient gets the same accurate information, in your voice
- Extend your care: Patients feel supported between visits
- Triage appropriately: Build in guidance about when to seek urgent care vs. wait for an appointment
Ideas for Patient-Facing Custom GPTs
Post-Procedure Care Guide
Upload your post-operative instructions and create a GPT that answers patient questions about recovery:
- "Is this amount of bruising normal after day 3?"
- "Can I shower yet?"
- "When should I be concerned about pain levels?"
Key instruction: "Always recommend calling the office or going to the ER for fever over 101°F, signs of infection, or uncontrolled pain."
New Diagnosis Companion
For patients newly diagnosed with chronic conditions (diabetes, hypertension, asthma), create a GPT loaded with your practice's educational materials:
- Answers questions about the condition in plain language
- Explains medications you commonly prescribe
- Provides lifestyle guidance consistent with your recommendations
- Generates questions for patients to bring to their next visit
Appointment Preparation Assistant
Help patients arrive prepared:
- "What should I bring to my first visit?"
- "How do I describe my symptoms clearly?"
- "What questions should I ask about my test results?"
This improves visit efficiency and patient satisfaction.
How to Build It
If you've read the Everyday Ways to Use AI topic, you already have the skills. The process is identical:
1. Choose your platform: Custom GPTs (ChatGPT Plus) or Claude Projects work well. Custom GPTs can be shared via link.
2. Write clear instructions: Define the GPT's role, what it should and shouldn't discuss, and when to recommend professional care.
3. Upload your materials: Patient handouts, FAQs, post-procedure instructions, condition guides.
4. Test thoroughly: Try to break it. Ask questions that should trigger safety warnings.
5. Share with patients: Provide the link at checkout, in follow-up emails, or on your patient portal.
Keep these guardrails in mind:
- Not for emergencies: Make clear in your instructions that emergencies require 911 or the ER, not a chatbot
- Not a replacement for care: The GPT should encourage—not replace—communication with your team
- Privacy: Standard ChatGPT isn't HIPAA-compliant. Instruct patients not to enter identifying information. For HIPAA-compliant options, consider enterprise healthcare AI platforms
- Review and update: Medical information changes. Schedule regular reviews of your GPT's knowledge base
You don't need to build something comprehensive. Start with a single use case—like post-procedure questions for one common procedure—and expand from there. A narrow, well-designed tool is more useful than a broad, mediocre one.
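If you would rather prototype in code before committing to a no-code builder, the hypothetical Python sketch below shows the same ingredients in miniature: a safety-focused system prompt, a crude escalation check, and a model call. The clinic name, red-flag keywords, and model name are invented for illustration; this is not a production design, which would require a HIPAA-compliant platform, clinical review, and real triage logic.

```python
# Hypothetical sketch of a patient-facing Q&A helper: a clear system prompt,
# a crude red-flag escalation check, and a model call. Illustrative only;
# a real tool needs a HIPAA-compliant platform and clinical review.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a post-procedure recovery guide for Example Family Surgery (a fictional practice).
Answer only questions about routine recovery after knee arthroscopy, and keep answers
consistent with the discharge instructions the patient received. Do not diagnose new
problems or adjust medications. For fever over 101°F, spreading redness, uncontrolled
pain, chest pain, or shortness of breath, tell the patient to call the office or seek
emergency care. Never ask for the patient's name, date of birth, or other identifying details."""

# Crude keyword screen, invented for illustration; a real system would use a validated triage protocol.
RED_FLAGS = ["chest pain", "can't breathe", "cannot breathe", "passed out", "suicid"]

def answer(question: str) -> str:
    lowered = question.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        return ("This could be an emergency. Please call 911 or go to the nearest "
                "emergency department now rather than using this tool.")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(answer("Is it normal to still have bruising around the incision on day 3?"))
```

Note that the escalation check runs before the model is ever called, which mirrors the "not for emergencies" guardrail above.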
Part 6: Special Populations and Considerations
Equity Considerations
The intersection of AI health tools and healthcare disparities deserves attention. AI use both reflects and potentially exacerbates existing inequities.
The finding that AI health tool users have lower educational attainment and rely more on emergency/urgent care settings suggests these tools may serve as a workaround for patients with less access to traditional primary care. According to the American Hospital Association, over 26% of Medicare beneficiaries lack home computer or smartphone access, and more than 22% of rural Americans lack adequate broadband.
AI systems are trained on data that reflects historical patterns—including healthcare disparities. Populations that have been historically marginalized are underrepresented in training datasets, potentially leading to less accurate or relevant information for these patients.
Pediatric and Adolescent Considerations
Parents researching their children's health present unique dynamics:
- Developmental information: Parents frequently query AI about milestones, autism spectrum features, and ADHD symptoms. AI may provide generic checklists without context about normal variation.
- Adolescent self-diagnosis: Teenagers increasingly use AI to research mental health conditions and physical symptoms. This can represent healthy self-awareness or lead to premature self-labeling.
- Privacy for adolescents: Teens may use AI for health questions they're uncomfortable asking parents or physicians. Consider asking adolescents directly about AI health use in confidential portions of visits.
Mental Health Specific Concerns
AI use for mental health information deserves particular attention:
- AI tools may underestimate suicide risk, with some studies showing AI failing to appropriately escalate urgent safety concerns
- Self-diagnosis of personality disorders or complex trauma can be harmful without professional assessment
- AI-generated coping strategies may be generic and miss patient-specific contraindications
- Patients in crisis should not rely on AI chatbots; ensure patients know crisis resources such as the 988 Suicide & Crisis Lifeline
Chronic Disease Management
Patients with chronic conditions present particular opportunities and risks. These patients have ongoing need for information and may make daily self-management decisions.
Guidance for chronic disease patients using AI:
- Use AI for understanding and tracking, not for treatment changes
- Bring AI-generated questions to your clinical team
- Never adjust insulin, anticoagulation, or other high-risk medications based on AI advice
- Use AI to help communicate symptoms accurately to your healthcare team
Part 7: The Opportunity
Patient AI use isn't going away. The question for clinicians is whether to view it as an obstacle or an opening.
Patients who consult AI are engaged in their health. They're taking initiative, seeking information, and thinking about their conditions between visits. This is exactly what we want from patients—active participation in their healthcare.
AI can surface questions patients wouldn't otherwise ask. When a patient brings an AI-generated list of considerations, they may raise issues that wouldn't have come up in a routine visit.
Teaching AI literacy is a form of patient education. Helping patients use AI tools effectively reinforces critical thinking about health information generally.
Transparency about AI builds trust. When clinicians openly discuss AI's strengths and limitations—rather than dismissing it—they demonstrate honesty and respect for patients' autonomy.
Building AI Literacy Into Your Practice
Consider how you might systematically address AI use:
- Intake forms: Add a question about what resources patients have consulted, including AI tools
- Waiting room materials: Provide guidance on using AI tools effectively and their limitations
- Visit routine: Normalize asking about AI use as part of history-taking
- After-visit materials: Recommend trusted resources as alternatives to unvetted AI queries
- Staff training: Ensure all team members can respond to patient questions consistently
Documentation Considerations
When patient AI use is relevant to clinical decision-making, consider documenting it:
- Note AI-influenced concerns: "Patient expressed concern about [condition] based on AI chatbot research. Discussed differential diagnosis and clinical reasoning."
- Document education provided: "Discussed appropriate use of AI health tools, including limitations."
- Record disagreements: "Patient preferred alternative approach based on AI recommendation. Discussed risks and benefits."
Quick Reference: When Patients Bring AI to the Visit
| Scenario | Approach |
|---|---|
| Patient is prepared and informed | Acknowledge their preparation; use as foundation for discussion; identify areas needing clarification |
| Patient is anxious about AI findings | Validate concern; explain probability vs. possibility; show exam findings that provide reassurance; create safety net |
| AI recommendation differs from yours | Acknowledge AI logic; explain your additional data (exam, history); make reasoning transparent; invite questions |
| AI raised a valid point | Acknowledge openly; investigate if warranted; model appropriate use of AI as supplement to clinical judgment |
| Patient challenging your recommendation | Stay curious not defensive; explore what the AI said; explain clinical reasoning; respect patient autonomy while ensuring informed decision |
Key Messages for Patients
- AI is a starting point, not a diagnosis.
- Better prompts yield better information.
- Your data isn't private—avoid identifying information.
- AI can hallucinate—verify important claims.
- Emergency symptoms need emergency care, not AI.
Patients who consult AI are engaged in their health. Meet them where they are, help them use these tools wisely, and leverage their research as an opportunity for deeper conversation and better care.
Patient-Facing AI Health Tools
Beyond general-purpose chatbots like ChatGPT and Gemini, a growing category of purpose-built AI health tools is emerging specifically for patient use. These tools aim to provide more clinically appropriate responses than general AI, often with built-in safeguards, physician oversight, or connections to care. Patients may mention using these tools, and it's worth understanding what they offer.
These tools are not endorsements. The patient-facing AI health space is evolving rapidly, with new entrants and changing features. Encourage patients to discuss any AI-derived guidance with their healthcare team, regardless of the source.
Purpose-Built Patient Health AI
| Tool | What It Does |
|---|---|
| Meet Virginia | AI tool for appointment preparation |
| Counsel Health | AI chatbot with on-demand physician access |
| My Doctor Friend | AI health copilot for symptom tracking and guidance |
| Ada Health | Established AI symptom assessment app |
| Ubie | AI symptom checker with care navigation |
| Docus | AI health platform with specialist access |
What Makes These Different From ChatGPT?
Purpose-built health AI tools typically offer several advantages over general-purpose chatbots:
- Clinical guardrails: Built-in protocols for recognizing emergencies and escalating appropriately
- Structured symptom collection: Systematic approaches that gather relevant clinical information
- Physician oversight: Some tools have physicians reviewing outputs or available for escalation
- Healthcare compliance: Many are HIPAA-compliant, unlike consumer chatbots
- Care connections: Pathways to telehealth, appointments, or specialist consultations
That said, these tools still share core limitations with all AI: they cannot perform physical examinations, they may miss atypical presentations, and they work best as a complement to—not replacement for—professional care.
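To make "structured symptom collection" and "clinical guardrails" concrete, here is a small hypothetical Python sketch of the underlying idea: a fixed intake structure and a red-flag screen that runs before any AI reasoning. The fields and flag list are invented for illustration and are not drawn from any of the products above, which use their own validated protocols.

```python
# Hypothetical illustration of structured intake plus a guardrail check.
# The fields and red-flag list are invented; real products use validated protocols.
from dataclasses import dataclass, field

@dataclass
class SymptomIntake:
    age: int
    chief_complaint: str
    duration_days: float
    severity_0_to_10: int
    associated_symptoms: list[str] = field(default_factory=list)

RED_FLAG_SYMPTOMS = {"chest pain", "shortness of breath", "fainting", "one-sided weakness"}

def needs_urgent_care(intake: SymptomIntake) -> bool:
    """Return True if any red-flag symptom is present, or pain is severe and acute."""
    if RED_FLAG_SYMPTOMS & {s.lower() for s in intake.associated_symptoms}:
        return True
    return intake.severity_0_to_10 >= 9 and intake.duration_days < 1

intake = SymptomIntake(
    age=46,
    chief_complaint="headache",
    duration_days=3,
    severity_0_to_10=5,
    associated_symptoms=["nausea"],
)
print("Escalate to urgent care:", needs_urgent_care(intake))
```

The point is the shape of the design: collect the same structured data from every patient, and escalate on defined criteria rather than leaving safety judgments to a free-text model.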
Talking With Patients About These Tools
When patients mention using purpose-built health AI:
- Ask what they learned: "What did the app tell you? Let's see how that fits with what I'm finding."
- Validate appropriate use: "Using that to organize your symptoms before coming in was helpful."
- Clarify limitations: "These tools are good at generating possibilities, but they can't examine you or know your full history."
- Distinguish from diagnosis: "The app gave you a list of possibilities—let me help narrow that down based on your exam."
This space is evolving rapidly. New tools launch regularly, existing tools add features, and some will inevitably disappear. The core principles remain constant: AI tools can help patients prepare, understand, and engage—but they work best when integrated with, not substituted for, professional clinical care.
Learning Objectives
After completing this topic, you should be able to:
- Recognize patterns of patient AI use in clinical encounters
- Apply communication strategies for partnering with AI-using patients
- Address discrepancies between AI-generated information and clinical recommendations
- Teach patients to use AI health tools more effectively and safely
- Identify special considerations for vulnerable populations
- Transform patient AI use into opportunities for engagement and education