USING AI

When Patients Bring AI to the Exam Room

Understanding, partnering with, and guiding patient use of AI health tools.

~35 min read · Patient communication
Core Question

How can you transform patient AI use from a clinical challenge into an opportunity for deeper engagement and better outcomes?

Prioritize This!

If you take one thing from this topic: start asking about AI use without judgment. Add this question to your routine: "Have you looked this up online or talked to any AI tools about it? I'd love to hear what you found."

This simple question—asked with genuine curiosity, not suspicion—opens dialogue, builds trust, and gives you insight into what your patient is thinking. Patients who feel judged for using AI will hide it. Patients who feel heard will share it.

Introduction: The New Reality

Your patient walks in with a printout from ChatGPT suggesting she might have lupus. Another asks why you didn't order the MRI that "the AI" recommended. A parent wants to know if the treatment plan you've outlined matches what Google's Gemini told them about their child's ear infection. These scenarios are no longer hypothetical—they're becoming routine.

According to a 2024 KFF poll, approximately 17% of U.S. adults now use AI chatbots at least monthly for health information and advice, with that figure climbing to 25% among adults under 30. A cross-sectional study published in the Journal of Medical Internet Research found that 21.5% of respondents had used ChatGPT specifically for online health information. The Elsevier Clinician of the Future 2025 survey reveals a striking mismatch: while patients increasingly turn to AI tools, only 16% of clinicians report using AI in direct decision-making, and just 30% feel they've received adequate training.

This topic examines the emerging landscape of patient AI use, how it manifests in clinical encounters, and—most importantly—how clinicians can transform what might feel like a challenge into an opportunity for deeper engagement and better outcomes.


Part 1: Understanding How Patients Use AI

Who Is Using AI for Health Information?

The demographic profile of health AI users challenges some assumptions. Research from ResearchMatch reveals that ChatGPT users for health information tend to be younger (average age 32.8 versus 39.1 for non-users), but—perhaps surprisingly—have lower educational attainment (49.9% versus 67% holding a bachelor's degree or higher). These users also show greater utilization of transient healthcare settings like emergency departments and urgent care.

This pattern suggests that AI health tools may be filling gaps where traditional primary care access falls short. Patients who lack a consistent primary care relationship, face longer wait times, or experience cost barriers may turn to AI as a more accessible—if imperfect—alternative.

What Are Patients Asking AI?

Patient AI queries cluster into several distinct categories: interpreting symptoms, understanding new diagnoses and test results, checking medications and side effects, and preparing questions for upcoming appointments.

A study examining AI chatbot use for perioperative health information found four key themes:

  • Patients prefer AI chatbots over traditional search engines because of the conversational interface.
  • They appreciate the improved accessibility and perceived quality of the information.
  • Some prefer AI interaction over human interaction for sensitive questions.
  • They recognize that effective prompting skills matter for getting useful responses.

Why Patients Choose AI Over Traditional Sources

The appeal is understandable when you consider the patient's perspective. As one patient told the New York Times: "ChatGPT has all day for me—it never rushes me out of the chat." This reflects a fundamental tension in modern healthcare: the average primary care visit lasts about 18 minutes, with much of that time consumed by documentation and administrative tasks. AI offers unlimited time, immediate availability, and—crucially—no judgment.

Research on patient motivations identifies several drivers:

  • Unlimited time and immediate availability
  • A conversational format that invites follow-up questions
  • Perceived freedom from judgment, especially around sensitive topics
  • Information delivered in plain, accessible language

Research from NYU found that patients had difficulty distinguishing between AI-generated and physician-generated responses to health queries—correctly identifying the source only about 65% of the time. This suggests that AI responses are reaching a level of sophistication that makes them credible to patients, for better or worse.

Why Patients Trust AI—And What It Teaches Us

Here's an uncomfortable truth that the research reveals: AI chatbots often outperform physicians in perceived empathy. A landmark JAMA Internal Medicine study found that healthcare professionals preferred ChatGPT responses over physician responses 79% of the time. Even more striking: empathetic responses were 9.8 times more common from ChatGPT (45.1%) than from physicians (4.6%).

Why? The differences are instructive: the chatbot responses were longer, more detailed, and unhurried, while physicians answering patient messages work under time pressure and competing demands that favor brevity.

The Opportunity for Providers

This isn't about AI replacing physicians—it's about AI revealing what patients crave: time, validation, and explanation. When a patient turns to ChatGPT, they're often seeking what a rushed 18-minute visit couldn't provide.

The research suggests a path forward: physicians using AI to draft responses produced messages that were 2-3 times longer and rated more empathetic, while reporting decreased cognitive burden. AI can help us be more human, not less.

Ask yourself: When a patient brings AI research to your visit, are they challenging your expertise—or showing you what they needed that they didn't get elsewhere?

The Limitations Patients Don't See

While patients perceive AI as comprehensive and knowledgeable, several critical limitations remain invisible to non-expert users. First, AI tools like ChatGPT lack real-time medical knowledge—they're trained on data with a cutoff date and don't incorporate the latest research, drug alerts, or emerging treatment protocols. A patient asking about a recently approved medication may receive outdated or simply fabricated information.

Second, AI cannot perform a physical examination. The entire diagnostic process in medicine often hinges on findings that can only be obtained through direct observation and touch: the texture of a skin lesion, the quality of a heart murmur, the tenderness pattern of an abdomen. AI is limited to what patients describe—and patients often don't know what's clinically significant to mention.

Third, AI lacks the ability to integrate the subtle contextual cues that experienced clinicians process automatically: the patient's affect, their living situation, their reliability as a historian, the nonverbal signals that suggest whether symptoms are being minimized or amplified. A study by symptom checker researchers found that even with perfect information, AI-based tools listed the correct diagnosis in the top three options only 51% of the time, compared to 84% for physicians given the same vignettes.

Finally, AI can hallucinate—generating confident-sounding information that is entirely fabricated. Studies have documented AI chatbots inventing medical research, citing papers that don't exist, and recommending treatments with no evidence base. Patients have no way to distinguish hallucinated information from accurate information without independent verification.


Part 2: How AI Health Use Shows Up in Clinical Encounters

The Prepared Patient

Some patients arrive having done their AI homework in constructive ways. They've generated lists of questions, researched their symptoms, and come ready for an informed conversation. An internist at Beth Israel Deaconess described a patient who uploaded his lab results to ChatGPT and arrived with organized questions. "I welcome patients showing me how they use AI," he noted. "Their research creates an opportunity for discussion."

These encounters can be efficient and satisfying for both parties. The patient feels heard and prepared; the clinician can address specific concerns rather than starting from zero.

The Worried Well (Now More Worried)

AI chatbots, when prompted with symptoms, tend to generate comprehensive differential diagnoses—including rare and serious conditions. A patient with a headache might receive a list that includes tension headache, migraine, sinusitis, but also meningitis, brain tumor, and aneurysm. Without clinical context to weight these possibilities, patients may fixate on the most frightening options.

An emergency physician who tested ChatGPT on real patient presentations found it performed reasonably well for classic presentations with complete information, but noted: "Most actual patient cases are not classic. The vast majority of any medical encounter is figuring out the correct patient narrative." The AI missed an ectopic pregnancy in a patient presenting with abdominal pain—a potentially fatal diagnosis that required the nuanced history-taking that revealed she didn't know she was pregnant.

The Second Opinion Seeker

Increasingly, patients use AI to validate—or challenge—their physician's recommendations. This can manifest as straightforward questions ("ChatGPT said I should ask about X—what do you think?") or more confrontational challenges ("The AI says I don't need this medication—why are you prescribing it?").

A Customertimes survey found that 40% of Americans are willing to follow medical advice generated by AI. While most still expect AI to serve as a supportive tool rather than a physician replacement, the willingness to act on AI recommendations without professional input raises safety concerns.

The Vulnerable Patient

Perhaps most concerning: research published in JMIR found that patients failed to distinguish potentially harmful AI advice from safe advice. While physicians lowered their ratings for responses they identified as harmful, patients' assessments of empathy and usefulness remained unchanged. Most harmful responses involved either overtreatment/overdiagnosis or undertreatment/underdiagnosis—errors that require clinical expertise to recognize.

The study noted that "profound knowledge of the specific field is necessary to identify harmful advice"—such as knowing that gallstones greater than 3 cm are associated with increased cancer risk. Patients simply don't have the framework to evaluate clinical accuracy.

Recognizing AI-Influenced Encounters

Clinicians should listen for signals that a patient has consulted AI:

  • Printouts or screenshots of chatbot conversations
  • References to what "the AI said" or "what I read online"
  • Unusually specific or technical terminology
  • Symptom stories that arrive already organized into a polished, rehearsed narrative

Consider simply asking: "Have you looked this up online or talked to any AI tools about it?" Normalizing the question removes stigma and opens dialogue.

Case Vignettes

The Worried Parent: A mother brings her 4-year-old with a rash. Before you can complete your assessment, she shows you a ChatGPT conversation suggesting possible Kawasaki disease, Stevens-Johnson syndrome, and scarlet fever. The child has a mild viral exanthem. Your task: validate her concern, explain how you assess probability, and demonstrate what clinical findings guide your thinking.

The Medication Skeptic: A patient with newly diagnosed hypertension returns for follow-up, having not filled the lisinopril prescription. He explains that ChatGPT told him about potential side effects and suggested "natural alternatives" first. Your task: acknowledge his research, explain risk-benefit in context, and have an honest conversation about evidence for alternatives.

The Informed Partner: A patient with a new cancer diagnosis arrives with her spouse, who has spent three days researching treatment options with AI. He's prepared a detailed spreadsheet comparing protocols. Some information is accurate; some reflects AI hallucination. Your task: honor their preparation, correct misinformation gently, and help them understand how decisions are made.

The Self-Diagnosed Teen: A 16-year-old has researched her symptoms on multiple AI platforms and is convinced she has ADHD. She's compiled supporting evidence and wants medication. Your task: take her concerns seriously while conducting appropriate evaluation and exploring alternatives.


Part 3: Partnering With AI-Using Patients

Reframe the Dynamic

The instinct to dismiss AI-derived information or feel threatened by it is understandable but counterproductive. Patients who research their health—through any medium—are demonstrating engagement. That's valuable.

Consider the analogy of a patient who arrives having read everything they could find about their diagnosis. We generally view this positively, even when their sources are imperfect. AI use is similar: it represents patients taking initiative in their healthcare. Our job is to help them do it effectively.

The "Curious Colleague" Approach

Rather than positioning yourself as the authority correcting AI mistakes, try approaching the conversation as a collaborative evaluation of information. Some phrases that work:

  • "I'd love to hear what you found. Walk me through it."
  • "That's a reasonable question to raise. Let's look at it together."
  • "Let's compare what the AI suggested with what I'm seeing on your exam and in your history."

Addressing Discrepancies

When your recommendation differs from what the AI suggested, transparency is key:

  1. Acknowledge the AI's reasoning: "Based on symptoms alone, the AI's suggestion makes sense."
  2. Explain your additional data: "But I can see that your exam shows X, your history includes Y, and that changes the probability significantly."
  3. Make your reasoning visible: "Here's why I'm less worried about the scary diagnosis and more focused on..."
  4. Invite questions: "Does that help explain why I'm thinking differently? What other questions do you have?"

When the AI Was Right (Or Partially Right)

Sometimes patients bring AI-generated insights that are genuinely useful—a medication interaction you might have missed, a question about a relevant clinical trial, or a reasonable concern that warrants investigation. Acknowledge this openly: "That's a good catch. Let's look into it together."

This builds trust and models the appropriate use of AI as a supplement to—not replacement for—clinical judgment.

Communication Scripts for Common Situations

When AI information is accurate but incomplete

"The AI gave you accurate general information about [condition]. What it couldn't know is your specific situation—your other health conditions, your medications, your lifestyle. Let me fill in how this applies to you specifically, because the treatment approach really depends on those details."

When AI information is outdated

"That recommendation was probably accurate a few years ago. The guidelines have actually changed—we now know that [updated approach] is more effective/safer. AI tools don't always have the most current information, which is one reason it's important to verify things with your healthcare team."

When AI information is simply wrong

"I can see why that seemed convincing, but I need to correct something important. AI tools sometimes generate information that sounds authoritative but isn't accurate—it's called 'hallucination.' In this case, [correct information]. This is exactly why these tools work best as a starting point for questions rather than as a final answer."

When a patient wants to follow AI over your recommendation

"I respect that you've done a lot of research, and I want to make sure you understand my reasoning so you can make an informed choice. Here's why I'm recommending [approach] instead of what the AI suggested: [specific reasons]. Ultimately, this is your health and your decision. I want to make sure you have complete information, including what concerns me about the alternative approach and what we would watch for if you choose to go that route."

Managing Anxiety Amplified by AI

For patients whose AI research has increased their anxiety, the clinical approach matters:

  1. Validate the concern: "It's understandable that seeing 'brain tumor' on a list would be frightening."
  2. Explain probability: "AI lists everything possible, but doesn't tell you what's likely. For your symptoms, in your age group, with your exam, here's what's much more probable..."
  3. Provide concrete reassurance: "Here's specifically what I looked for on your exam that tells me we don't need to worry about X."
  4. Create a safety net: "If you develop any of these specific symptoms, that would change my thinking and you should call right away."

Part 4: Teaching Patients to Use AI More Effectively

Rather than discouraging AI use—which is unlikely to work and may damage trust—consider equipping patients with skills to use these tools more safely. This represents a form of digital health literacy that's becoming as important as understanding how to read a nutrition label.

Core Principles for Patients

1. AI Is a Starting Point, Not a Destination

Help patients understand that AI can be useful for generating questions, understanding terminology, or exploring possibilities—but it cannot examine them, know their full history, or make clinical decisions. The appropriate endpoint for serious health concerns is always a qualified professional.

2. Prompting Matters

Research shows that the quality of AI responses depends heavily on how questions are framed. Teaching patients basic prompting skills can improve the information they receive: include relevant context (age, existing conditions, current medications), ask for plain-language explanations, and request questions to bring to the next visit rather than a diagnosis. The prompt templates under "Practical Tips for Patient AI Use" below give patients ready-made starting points.
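
For example, you might show patients the difference between a bare question and a context-rich one (a hypothetical illustration; the details are placeholders to adapt):

  Vague: "Is lisinopril dangerous?"

  Better: "I'm a 52-year-old who was just prescribed lisinopril for high blood pressure, and I also take metformin. In plain language, what side effects are common, which ones should prompt me to call my doctor, and what questions should I ask at my next visit?"

The second version gives the AI context to work with and steers the response toward preparation for a professional conversation rather than a verdict.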

3. Privacy Isn't Guaranteed

Consumer AI tools are not HIPAA-compliant. Data entered into ChatGPT, Claude, or similar tools goes to companies that may use it for training or other purposes. Patients should avoid including identifying information (name, date of birth, Social Security number) in health queries.

4. AI Can Hallucinate

Large language models can generate confident-sounding but entirely fabricated information. Patients should be especially skeptical of specific claims (drug interactions, dosages, treatment protocols) and verify anything important with a professional or authoritative medical source.

5. Emergency Symptoms Require Emergency Care

AI is not appropriate for evaluating urgent symptoms. Patients should understand that chest pain, severe shortness of breath, neurological changes, or other potentially serious symptoms require immediate professional evaluation—not an AI conversation.

Practical Tips for Patient AI Use

Consider providing patients with these concrete strategies:

Better Ways to Use AI for Health Information
  • Understanding diagnoses: "Explain [condition] in simple terms—what causes it, how it's treated, and what to expect."
  • Preparing for appointments: "What questions should I ask my doctor about [condition/medication/test]?"
  • Understanding lab results: "What does a [specific test] measure and what do high/low values generally indicate?"
  • Medication information: "What are common side effects of [medication] and when should I be concerned?"
  • Learning terminology: "What does [medical term] mean in plain language?"
Riskier Ways to Use AI (Proceed With Caution)
  • Self-diagnosing based on symptom descriptions
  • Making treatment decisions without professional input
  • Relying on AI for medication dosing or interactions
  • Using AI during potential emergencies
  • Trusting AI recommendations over clinical advice

The "Verify Big Claims" Rule

As one physician advises: "If it tells you something that you think is really big, verify it with an expert human." This simple heuristic captures much of what patients need to know. AI can be useful for routine questions and background information. For anything significant—a worrisome symptom, a treatment decision, a major diagnosis—professional verification is essential.


Part 5: Creating AI Tools for Your Patients

Rather than hoping patients use general-purpose AI wisely, you can create custom AI tools tailored to your practice and patients. This gives you control over the information patients receive while meeting them where they are. No coding required—just the skills you've already developed in this course.

Why Create Patient-Facing AI Tools?

Ideas for Patient-Facing Custom GPTs

1. Post-Procedure Care Guide

Upload your post-operative instructions and create a GPT that answers patient questions about recovery:

  • "Is this amount of bruising normal after day 3?"
  • "Can I shower yet?"
  • "When should I be concerned about pain levels?"

Key instruction: "Always recommend calling the office or going to the ER for fever over 101°F, signs of infection, or uncontrolled pain."

2. New Diagnosis Companion

For patients newly diagnosed with chronic conditions (diabetes, hypertension, asthma), create a GPT loaded with your practice's educational materials:

  • Answers questions about the condition in plain language
  • Explains medications you commonly prescribe
  • Provides lifestyle guidance consistent with your recommendations
  • Generates questions for patients to bring to their next visit

3. Appointment Preparation Assistant

Help patients arrive prepared:

  • "What should I bring to my first visit?"
  • "How do I describe my symptoms clearly?"
  • "What questions should I ask about my test results?"

This improves visit efficiency and patient satisfaction.

How to Build It

If you've read the Everyday Ways to Use AI topic, you already have the skills. The process is identical:

  1. Choose your platform: Custom GPTs (ChatGPT Plus) or Claude Projects work well. Custom GPTs can be shared via link.
  2. Write clear instructions: Define the GPT's role, what it should and shouldn't discuss, and when to recommend professional care (a sample set of instructions appears after this list).
  3. Upload your materials: Patient handouts, FAQs, post-procedure instructions, condition guides.
  4. Test thoroughly: Try to break it. Ask questions that should trigger safety warnings.
  5. Share with patients: Provide the link at checkout, in follow-up emails, or on your patient portal.
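
As a concrete illustration of step 2, here is a sketch of the kind of instructions you might give a post-procedure care GPT like the one described above. Bracketed items are placeholders for your own practice details; treat this as a starting point to adapt and test, not a vetted configuration.

  You are the post-procedure care assistant for [Practice Name]. Your only role is to answer questions about recovery after [procedure], using the uploaded discharge instructions as your source.

  - Answer in plain, reassuring language.
  - Use only the uploaded documents; if the answer isn't covered there, say so and suggest calling the office at [phone number].
  - Always recommend calling the office or going to the ER for fever over 101°F, signs of infection, or uncontrolled pain.
  - If anything sounds like an emergency (chest pain, trouble breathing, heavy bleeding), tell the user to call 911 immediately; you are not for emergencies.
  - Do not diagnose new problems, adjust medications, or give dosing advice.
  - Remind users not to share their name, date of birth, or other identifying information in this chat.

Testing against step 4 then means deliberately asking questions that should trigger these guardrails, such as describing a high fever or asking for a new medication dose, and confirming the GPT escalates as you specified.
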
Important Considerations
  • Not for emergencies: Make clear in your instructions that emergencies require 911 or the ER, not a chatbot
  • Not a replacement for care: The GPT should encourage—not replace—communication with your team
  • Privacy: Standard ChatGPT isn't HIPAA-compliant. Instruct patients not to enter identifying information. For HIPAA-compliant options, consider enterprise healthcare AI platforms
  • Review and update: Medical information changes. Schedule regular reviews of your GPT's knowledge base
Start Simple

You don't need to build something comprehensive. Start with a single use case—like post-procedure questions for one common procedure—and expand from there. A narrow, well-designed tool is more useful than a broad, mediocre one.


Part 6: Special Populations and Considerations

Equity Considerations

The intersection of AI health tools and healthcare disparities deserves attention. AI use both reflects and potentially exacerbates existing inequities.

The finding that AI health tool users have lower educational attainment and rely more on emergency/urgent care settings suggests these tools may serve as a workaround for patients with less access to traditional primary care. According to the American Hospital Association, over 26% of Medicare beneficiaries lack home computer or smartphone access, and more than 22% of rural Americans lack adequate broadband.

AI systems are trained on data that reflects historical patterns—including healthcare disparities. Populations that have been historically marginalized are underrepresented in training datasets, potentially leading to less accurate or relevant information for these patients.

Pediatric and Adolescent Considerations

Parents researching their children's health present unique dynamics:

  • Anxiety escalates quickly when AI lists rare but serious pediatric diagnoses
  • Parents may arrive already committed to a specific diagnosis or requested test
  • Adolescents may research symptoms on their own, sometimes privately, before involving a parent or clinician

Mental Health Specific Concerns

AI use for mental health information deserves particular attention. As the teen vignette above illustrates, patients may arrive convinced of a self-diagnosis assembled from AI conversations; these concerns deserve serious evaluation rather than dismissal, along with a clear explanation of how a formal assessment differs from a symptom checklist.

Chronic Disease Management

Patients with chronic conditions present particular opportunities and risks: they have an ongoing need for information and make daily self-management decisions between visits.

Guidance for chronic disease patients using AI:

  • Use AI to understand the condition, terminology, and test results, not to change treatment
  • Never adjust medication doses based on AI advice without checking with the care team
  • Bring AI-generated questions and concerns to the next visit rather than acting on them alone
  • Treat new or worsening symptoms as a reason to call the office, not to consult a chatbot


Part 7: The Opportunity

Patient AI use isn't going away. The question for clinicians is whether to view it as an obstacle or an opening.

Patients who consult AI are engaged in their health. They're taking initiative, seeking information, and thinking about their conditions between visits. This is exactly what we want from patients—active participation in their healthcare.

AI can surface questions patients wouldn't otherwise ask. When a patient brings an AI-generated list of considerations, they may raise issues that wouldn't have come up in a routine visit.

Teaching AI literacy is a form of patient education. Helping patients use AI tools effectively reinforces critical thinking about health information generally.

Transparency about AI builds trust. When clinicians openly discuss AI's strengths and limitations—rather than dismissing it—they demonstrate honesty and respect for patients' autonomy.

Building AI Literacy Into Your Practice

Consider how you might systematically address AI use:

  • Add the screening question from the top of this topic to intake or rooming workflows
  • Keep a brief handout on safer AI use; the key messages below work well as a starting point
  • Identify one or two trusted verification sources to recommend, such as the NIH patient-information resources listed at the end of this topic

Documentation Considerations

When patient AI use is relevant to clinical decision-making, consider documenting it: what the patient reported the AI suggested, how it shaped their concerns or decisions (for example, a prescription left unfilled), and the counseling you provided. A single line often suffices: "Patient reviewed symptoms with an AI chatbot before the visit and was concerned about [diagnosis]; discussed reasoning, reviewed exam findings, and agreed on a follow-up plan."


Quick Reference: When Patients Bring AI to the Visit

Scenario → Approach

  • Patient is prepared and informed → Acknowledge their preparation; use it as a foundation for discussion; identify areas needing clarification
  • Patient is anxious about AI findings → Validate the concern; explain probability vs. possibility; show the exam findings that provide reassurance; create a safety net
  • AI recommendation differs from yours → Acknowledge the AI's logic; explain your additional data (exam, history); make your reasoning transparent; invite questions
  • AI raised a valid point → Acknowledge it openly; investigate if warranted; model appropriate use of AI as a supplement to clinical judgment
  • Patient is challenging your recommendation → Stay curious, not defensive; explore what the AI said; explain your clinical reasoning; respect patient autonomy while ensuring an informed decision

Key Messages for Patients

  1. AI is a starting point, not a diagnosis.
  2. Better prompts yield better information.
  3. Your data isn't private—avoid identifying information.
  4. AI can hallucinate—verify important claims.
  5. Emergency symptoms need emergency care, not AI.
The Bottom Line

Patients who consult AI are engaged in their health. Meet them where they are, help them use these tools wisely, and leverage their research as an opportunity for deeper conversation and better care.


Resources for Further Learning

Essential Reading

JAMA Internal Medicine, 2023 · The landmark study showing ChatGPT responses were preferred over physician responses 79% of the time, with empathetic responses 9.8x more common.
STAT News, October 2025 · Why "tell me about your use of chatbots" should become a standard question at appointments.
STAT News, September 2025 · How chatbots are changing patient narratives: "patients aren't just arriving with facts—they're arriving with shaped, rehearsed stories."
Mount Sinai, 2025 · Study finding AI chatbots are highly vulnerable to repeating and elaborating on false medical information.
Washington Post, October 2025 · Patient-focused guidance you can share with your patients.

Podcasts

Explores why empathy—not memorization—may become the most valuable clinical skill in the AI era. Episodes feature discussions about patient-AI interactions and the future of clinical conversations.
Jonathan Chen, MD, PhD and Michael Pfeffer discuss how AI is changing physician-patient interactions, including research on chatbot-assisted clinical decisions.
Microsoft Research Podcast · Dr. Murray from UCSF discusses AI's integration into clinical workflows, including patient messaging and empathetic communication.
Digital health podcast exploring how AI is reshaping doctor-patient interactions and the future of healthcare communication.

For Your Patients

NIH's trusted patient health information—recommend as a verification source for AI-generated health claims.
Research and resources on patient access to clinical notes—context for discussions about AI interpretation of medical records.

Building Patient-Facing AI Tools

PubMed, 2024 · Practical guide to building custom GPTs—applies directly to patient-facing tools.

Patient-Facing AI Health Tools

Beyond general-purpose chatbots like ChatGPT and Gemini, a growing category of purpose-built AI health tools is emerging specifically for patient use. These tools aim to provide more clinically appropriate responses than general AI, often with built-in safeguards, physician oversight, or connections to care. Patients may mention using these tools, and it's worth understanding what they offer.

Important Context

These tools are not endorsements. The patient-facing AI health space is evolving rapidly, with new entrants and changing features. Encourage patients to discuss any AI-derived guidance with their healthcare team, regardless of the source.

Purpose-Built Patient Health AI

Each entry lists the tool, what it does, and its key features.

Meet Virginia: AI tool for appointment preparation
  • Helps patients prepare and organize information before visits
  • Structures symptoms, concerns, and questions for appointments
  • Designed to make clinical encounters more productive
Counsel Health: AI chatbot with on-demand physician access
  • Medical-grade AI collects history and provides initial guidance
  • Escalates to live telehealth with a physician when needed ($29)
  • Emergency detection for urgent symptoms
  • HIPAA and SOC 2 compliant
  • Can connect medical records for personalized responses
My Doctor Friend: AI health copilot for symptom tracking and guidance
  • Tracks symptoms from onset through recovery
  • Adjusts guidance as symptoms evolve
  • Helps determine when to seek care
  • iOS app in beta
Ada Health: Established AI symptom assessment app
  • Conversational symptom checker
  • Provides possible conditions ranked by likelihood
  • Widely used (millions of downloads)
  • Integrates with some health systems
Ubie: AI symptom checker with care navigation
  • Considers age, medical history, and lifestyle factors
  • Generates symptom summary for doctor visits
  • 71.6% top-10 diagnostic accuracy in testing
Docus: AI health platform with specialist access
  • AI symptom checker and lab interpretation
  • Option for second opinions from specialists
  • Health report generation

What Makes These Different From ChatGPT?

Purpose-built health AI tools typically offer several advantages over general-purpose chatbots:

  • Built-in safeguards, including emergency detection for urgent symptoms
  • Escalation paths to physician oversight or live telehealth
  • Stronger privacy practices; some are HIPAA and SOC 2 compliant
  • Structured outputs, such as symptom summaries designed to bring to a visit
  • Integration with medical records or health systems in some cases

That said, these tools still share core limitations with all AI: they cannot perform physical examinations, they may miss atypical presentations, and they work best as a complement to—not replacement for—professional care.

Talking With Patients About These Tools

When patients mention using purpose-built health AI:

  • Ask which tool they used and what it told them
  • Review any summaries or reports it generated; they can be a useful starting point for the visit
  • Reinforce that AI-derived guidance should be discussed with their healthcare team, regardless of the source

The Emerging Landscape

This space is evolving rapidly. New tools launch regularly, existing tools add features, and some will inevitably disappear. The core principles remain constant: AI tools can help patients prepare, understand, and engage—but they work best when integrated with, not substituted for, professional clinical care.

Learning Objectives

After completing this topic, you should be able to:

  • Recognize patterns of patient AI use in clinical encounters
  • Apply communication strategies for partnering with AI-using patients
  • Address discrepancies between AI-generated information and clinical recommendations
  • Teach patients to use AI health tools more effectively and safely
  • Identify special considerations for vulnerable populations
  • Transform patient AI use into opportunities for engagement and education