FOUNDATIONS · EXERCISE

The Prompt Matters

A side-by-side comparison using a clinical AI tool—see how the same question produces dramatically different outputs.

Hands-on exercise

The Setup

You're a pediatrician. A 12-year-old boy with familial hypercholesterolemia (FH) is in your office. His LDL is 190 mg/dL despite six months of dietary modifications. His father had a myocardial infarction at age 42. You're considering statin therapy and want to check the evidence.

You open a clinical AI tool to help inform your decision.

The Key Insight

How you ask the question will dramatically shape the answer you get—and whether that answer actually helps you care for this patient.


Prompt A: The Vague Approach

[Screenshot: the vague prompt as typed into the clinical AI tool]
"What about statins in kids?"

What's Wrong With This Prompt?

Let's break down why this prompt is likely to produce unhelpful—or potentially misleading—output.

Problem 1: No clinical context

The AI has no idea who's asking or why. Are you a:

• Pediatrician deciding whether to start therapy for a specific patient?
• Family physician fielding a question at a well-child visit?
• Parent worried about a medication your child was just prescribed?
• Student or trainee studying lipid management?

Each of these would warrant a completely different response. Without context, the AI has to guess—and it will default to a generic, encyclopedia-style overview that may not address your actual need.

Problem 2: No specific question

"What about statins in kids?" could mean dozens of things:

The AI will try to cover everything superficially rather than anything deeply. You'll get breadth without the depth you need to make a clinical decision.

Problem 3: No patient specifics

Statins in a child with heterozygous FH is a very different question than statins in a child with obesity-related dyslipidemia. Without knowing your patient's diagnosis, labs, family history, and what's already been tried, the AI can't tailor its response to your situation.

Problem 4: Increased hallucination risk

Here's a critical point from Module 2: vague prompts increase hallucination risk. When a question is ambiguous, the AI has more "degrees of freedom" in generating a response. It's more likely to:

• Blend findings from different patient populations and contexts
• State generalities with the confidence of specific recommendations
• Fill gaps with plausible-sounding but unverifiable claims

A specific question constrains the response space. A vague question lets the AI wander.

Problem 5: No way to verify relevance

Even if the AI produces accurate general information, how will you know if it applies to your patient? You'll have to do the mental work of filtering and applying—work the AI could have done if you'd given it the information to work with.

What You're Likely to Get

With a prompt like "What about statins in kids?", expect:

• A long, generic overview of pediatric statin use
• Broad statements that are technically true but hard to act on
• Little that is specific to FH, this LDL level, or this family history
• A response you'll have to filter heavily before any of it applies to your patient

You've essentially asked the AI to write a review article when what you needed was a focused consultation.

The Clinical Parallel

Imagine calling a colleague for a curbside consult and saying only: "Hey, what about statins in kids?"

They'd immediately ask: "What's the situation? Who's the patient? What have you tried? What specifically are you wondering about?" Your colleague needs context to give useful advice. So does the AI. The difference is that your colleague will ask clarifying questions. The AI will just... answer. With whatever interpretation it defaults to.


Prompt B: The Specific Approach

[Screenshot: the specific, well-structured prompt about pediatric statin therapy]
"I'm a pediatrician seeing a 12-year-old boy with familial hypercholesterolemia. His LDL is 190 mg/dL despite 6 months of dietary modifications. His father had an MI at age 42. What does current evidence say about initiating statin therapy in pediatric FH patients—specifically, at what LDL threshold, which statins have pediatric safety data, and what monitoring is recommended?"

What Makes This Prompt Effective?

Element 1: Professional identity establishes context

"I'm a pediatrician" tells the AI:

Element 2: Specific patient details focus the response

• A 12-year-old boy with familial hypercholesterolemia
• LDL of 190 mg/dL despite 6 months of dietary modification
• A father who had an MI at age 42

Each detail constrains the response toward relevance. The AI isn't guessing what scenario you're asking about; you've told it.

Element 3: Explicit question structure

The prompt doesn't just ask "what should I do?" It breaks down the specific information needed:

• At what LDL threshold should statin therapy be initiated?
• Which statins have pediatric safety data?
• What monitoring is recommended once therapy begins?

This structure guides the AI to organize its response around your actual decision points rather than generating a generic overview.

Element 4: Evidence framing

"What does current evidence say" signals that you want evidence-based guidance, not opinion. This primes the AI to reference guidelines, studies, and established recommendations rather than generating general statements.

What You're Likely to Get

With this specific prompt, expect:

• Guidance organized around your three sub-questions: initiation threshold, agent selection, and monitoring
• References to pediatric guidelines and the safety data behind specific statins
• Specific, checkable claims rather than broad generalities
• Acknowledgment of where the evidence is limited

The response should function more like a focused literature summary or specialist consultation than an encyclopedia entry.

Reduced Hallucination Risk

The specific prompt reduces hallucination risk in several ways:

Narrower scope = less room for error. When you ask about pediatric FH specifically, the AI draws on a more focused knowledge domain. When you ask vaguely about "statins in kids," it has to synthesize across many contexts, increasing the chance of inappropriate blending.

Checkable claims. Specific questions tend to produce specific, verifiable answers. "Atorvastatin is FDA-approved for pediatric FH starting at age 10" is verifiable. "Statins can be used in children in some situations" is technically true but not useful or checkable.

Explicit uncertainty prompting. By asking "what does the evidence say," you're implicitly asking the AI to anchor its response in sources—making it more likely to acknowledge when evidence is limited rather than generating confident-sounding filler.


The Comparison at a Glance

Dimension | Vague Prompt | Specific Prompt
Who's asking | Unknown | Pediatrician
Clinical scenario | None | 12yo male, FH, LDL 190, diet failed, FHx of early MI
Question | Unclear ("what about...?") | Three specific sub-questions
Expected output | Generic overview | Targeted decision support
Hallucination risk | Higher | Lower
Clinical utility | Low (requires significant filtering) | High (directly applicable)
Verification | Difficult (claims are vague) | Easier (claims are specific)

The Takeaway

The same clinical question can produce dramatically different AI outputs depending on how you frame it. This isn't about knowing magic words or secret techniques. It's about the same principle that governs good clinical communication:

Specific input produces specific output.

You already know this from taking histories. "I don't feel good" gives you nothing. "I've had three days of progressive shortness of breath, worse when I lie flat, and I noticed my ankles are swollen" gives you a direction.

The habit to build: before submitting any prompt to a clinical AI tool, pause and ask yourself:

• Does the AI know who's asking and why?
• Have I included the patient specifics that would change the answer?
• Have I asked the precise questions my decision actually hinges on?

Those few seconds of thought consistently transform vague, marginally useful AI responses into focused, clinically applicable guidance.


Connection to Module 2: Why This Matters for Safety

Remember from Module 2 that AI hallucinations aren't random: they're more likely under certain conditions. Vague prompts create those conditions:

• Ambiguity about what's actually being asked
• A broad scope that forces the model to blend information from many contexts
• No anchor to a specific evidence base, leaving gaps to be filled with plausible-sounding text

Specific prompts are a safety practice, not just an efficiency practice. They reduce the attack surface for hallucination by constraining what the AI needs to produce.

This doesn't eliminate the need for verification—you should still check important claims regardless of how you prompted. But specific prompts make verification easier (because claims are more concrete) and reduce the volume of content you need to verify (because the response is more focused).

Key Principles Reinforced

From Module 1: AI tools are most useful when you understand what they actually do—predict likely text given input. Better input = better predictions.

From Module 2: Hallucinations are more likely with vague, open-ended prompts. Specificity is a risk-reduction strategy.

From Module 3: The principles of good clinical communication—context, specificity, clear questions—apply directly to AI interaction. You already have these skills; now apply them to a new medium.


Your Challenge: Try It Yourself

Hands-On Comparison

  1. Run both prompts through a clinical AI tool you have access to (Open Evidence, Glass AI, or a general tool like Claude or ChatGPT). If you'd rather script the comparison, see the optional sketch after this list.

  2. Compare the outputs. Note:

    • How much of each response is directly relevant to the clinical scenario?
    • Which response gives you actionable guidance?
    • Which response makes claims that are easier to verify?
    • Which response would you trust more, and why?

  3. Reflect on the difference. The time investment in the specific prompt was perhaps 30 seconds of additional typing. What was the return on that investment in terms of output quality?

  4. Apply it forward. The next time you use an AI tool for a clinical question, deliberately construct a specific prompt instead of falling back on a quick, vague one. Track whether the outputs improve.
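Optional: if you have API access to a general-purpose model and prefer to script the side-by-side comparison, a minimal sketch using the OpenAI Python SDK might look like the following. The model name is a placeholder to adapt, and this assumes the openai package is installed and an OPENAI_API_KEY is set; the clinical tools named above are chat interfaces and are not covered by this sketch.

```python
# Minimal sketch: send the vague and specific prompts to a general-purpose
# model and print both responses for side-by-side comparison.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "What about statins in kids?"
specific_prompt = (
    "I'm a pediatrician seeing a 12-year-old boy with familial "
    "hypercholesterolemia. His LDL is 190 mg/dL despite 6 months of dietary "
    "modifications. His father had an MI at age 42. What does current "
    "evidence say about initiating statin therapy in pediatric FH patients: "
    "at what LDL threshold, which statins have pediatric safety data, and "
    "what monitoring is recommended?"
)

for label, prompt in [("VAGUE", vague_prompt), ("SPECIFIC", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```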

Extra Challenge

Think of a clinical question you've had recently—something you might have searched UpToDate or consulted a colleague about. Write two versions of that question as AI prompts:

  1. A quick, vague version (how you might naturally type it)
  2. A specific, well-structured version (applying what you've learned)

Run both through an AI tool. Did the extra 30 seconds of crafting the specific prompt save you time in filtering, verifying, or re-prompting? Most clinicians find the investment pays off immediately.