The Prompt Matters
A side-by-side comparison using a clinical AI tool—see how the same question produces dramatically different outputs.
The Setup
You're a pediatrician. A 12-year-old boy with familial hypercholesterolemia (FH) is in your office. His LDL is 190 mg/dL despite six months of dietary modifications. His father had a myocardial infarction at age 42. You're considering statin therapy and want to check the evidence.
You open a clinical AI tool to help inform your decision.
How you ask the question will dramatically shape the answer you get—and whether that answer actually helps you care for this patient.
Prompt A: The Vague Approach
"What about statins in kids?"
What's Wrong With This Prompt?
Let's break down why this prompt is likely to produce unhelpful—or potentially misleading—output.
Problem 1: No clinical context
The AI has no idea who's asking or why. Are you a:
- Pediatrician considering treatment for a specific patient?
- Medical student studying pharmacology?
- Parent who read something online?
- Researcher reviewing the literature?
Each of these would warrant a completely different response. Without context, the AI has to guess—and it will default to a generic, encyclopedia-style overview that may not address your actual need.
Problem 2: No specific question
"What about statins in kids?" could mean dozens of things:
- Are they safe?
- Are they effective?
- Which ones are approved?
- At what age can they be started?
- What are the indications?
- What are the side effects?
- How do you monitor?
- What do guidelines say?
The AI will try to cover everything superficially rather than anything deeply. You'll get breadth without the depth you need to make a clinical decision.
Problem 3: No patient specifics
Statins in a child with heterozygous FH is a very different question than statins in a child with obesity-related dyslipidemia. Without knowing your patient's diagnosis, labs, family history, and what's already been tried, the AI can't tailor its response to your situation.
Problem 4: Increased hallucination risk
Here's a critical point from Module 2: vague prompts increase hallucination risk. When a question is ambiguous, the AI has more "degrees of freedom" in generating a response. It's more likely to:
- Fill gaps with plausible-sounding but potentially inaccurate information
- Blend information from different contexts inappropriately
- Generate confident-sounding statements about things it's uncertain about
A specific question constrains the response space. A vague question lets the AI wander.
Problem 5: No way to verify relevance
Even if the AI produces accurate general information, how will you know if it applies to your patient? You'll have to do the mental work of filtering and applying—work the AI could have done if you'd given it the information to work with.
What You're Likely to Get
With a prompt like "What about statins in kids?", expect:
- A general overview of pediatric statin use
- Broad statements about safety and efficacy
- Possibly outdated or non-specific guideline references
- No guidance on YOUR patient's specific situation
- No clear decision support
- Information you'll need to significantly filter and supplement
You've essentially asked the AI to write a review article when what you needed was a focused consultation.
Imagine calling a colleague for a curbside consult and saying only: "Hey, what about statins in kids?"
They'd immediately ask: "What's the situation? Who's the patient? What have you tried? What specifically are you wondering about?" Your colleague needs context to give useful advice. So does the AI. The difference is that your colleague will ask clarifying questions. The AI will just... answer. With whatever interpretation it defaults to.
Prompt B: The Specific Approach
"I'm a pediatrician. I have a 12-year-old boy with familial hypercholesterolemia whose LDL is 190 mg/dL despite 6 months of dietary modifications. His father had an MI at age 42. What does current evidence say about: (1) the LDL threshold for initiating statin therapy, (2) which statins have pediatric safety data, and (3) recommended monitoring?"
What Makes This Prompt Effective?
Element 1: Professional identity establishes context
"I'm a pediatrician" tells the AI:
- Use clinical language appropriate for a physician
- Assume baseline medical knowledge
- Frame the response as peer-to-peer clinical guidance
- Focus on practical decision-making, not patient education
Element 2: Specific patient details focus the response
- "12-year-old boy"—age and sex matter for pediatric prescribing
- "familial hypercholesterolemia"—this is the specific indication, not generic dyslipidemia
- "LDL is 190 mg/dL"—quantifies the severity
- "despite 6 months of dietary modifications"—establishes that first-line therapy has been tried
- "father had an MI at age 42"—provides family history that influences risk assessment
Each detail constrains the response toward relevance. The AI isn't guessing what scenario you're asking about—you've told it.
Element 3: Explicit question structure
The prompt doesn't just ask "what should I do?" It breaks down the specific information needed:
- LDL threshold for treatment initiation
- Which statins have pediatric safety data
- Monitoring recommendations
This structure guides the AI to organize its response around your actual decision points rather than generating a generic overview.
Element 4: Evidence framing
"What does current evidence say" signals that you want evidence-based guidance, not opinion. This primes the AI to reference guidelines, studies, and established recommendations rather than generating general statements.
What You're Likely to Get
With this specific prompt, expect:
- Targeted information about statin therapy specifically in pediatric FH
- Reference to relevant guidelines (AAP, AHA, NLA recommendations for FH)
- Specific LDL thresholds for treatment initiation in this population
- Information about which statins (likely atorvastatin, rosuvastatin, pravastatin) have pediatric indication or safety data
- Monitoring parameters (baseline and follow-up LFTs, CK if symptomatic, lipid panels)
- Contextually relevant information you can actually apply to your patient
The response should function more like a focused literature summary or specialist consultation than an encyclopedia entry.
Reduced Hallucination Risk
The specific prompt reduces hallucination risk in several ways:
Narrower scope = less room for error. When you ask about pediatric FH specifically, the AI draws on a more focused knowledge domain. When you ask vaguely about "statins in kids," it has to synthesize across many contexts, increasing the chance of inappropriate blending.
Checkable claims. Specific questions tend to produce specific, verifiable answers. "Atorvastatin is FDA-approved for pediatric FH starting at age 10" is verifiable. "Statins can be used in children in some situations" is technically true but not useful or checkable.
Evidence anchoring. By asking "what does the evidence say," you prompt the AI to ground its response in sources, which makes it more likely to acknowledge when evidence is limited rather than generate confident-sounding filler.
The Comparison at a Glance
| Dimension | Vague Prompt | Specific Prompt |
|---|---|---|
| Who's asking | Unknown | Pediatrician |
| Clinical scenario | None | 12yo male, FH, LDL 190, diet failed, FHx of early MI |
| Question | Unclear ("what about...?") | Three specific sub-questions |
| Expected output | Generic overview | Targeted decision support |
| Hallucination risk | Higher | Lower |
| Clinical utility | Low—requires significant filtering | High—directly applicable |
| Verification | Difficult—claims are vague | Easier—claims are specific |
The Takeaway
The same clinical question can produce dramatically different AI outputs depending on how you frame it. This isn't about knowing magic words or secret techniques. It's the same principle that governs good clinical communication: context and specificity are what make an answer useful.
You already know this from taking histories. "I don't feel good" gives you nothing. "I've had three days of progressive shortness of breath, worse when I lie flat, and I noticed my ankles are swollen" gives you a direction.
The habit to build: before submitting any prompt to a clinical AI tool, pause and ask yourself:
- Have I told it who I am and what context I'm working in?
- Have I provided the relevant patient details?
- Have I specified exactly what I need to know?
- Is my question focused enough that I'll be able to verify the answer?
Those few seconds of thought consistently transform vague, marginally useful AI responses into focused, clinically applicable guidance.
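For those who like the checklist made concrete, here is a minimal sketch of it as a pre-submission gate in Python. The names are invented for illustration, and the checks are the four questions above answered by you, not anything detected automatically.

```python
# Pre-submission checklist as code: a quick self-review before any prompt
# goes to a clinical AI tool.
PRE_SUBMISSION_CHECKLIST = (
    "Have I told it who I am and what context I'm working in?",
    "Have I provided the relevant patient details?",
    "Have I specified exactly what I need to know?",
    "Is my question focused enough that I'll be able to verify the answer?",
)


def ready_to_submit(answers: list[bool]) -> bool:
    """Return True only if every checklist question is answered 'yes'."""
    return len(answers) == len(PRE_SUBMISSION_CHECKLIST) and all(answers)


# The vague prompt "What about statins in kids?" fails every check:
print(ready_to_submit([False, False, False, False]))  # False: revise before sending
```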
Connection to Module 2: Why This Matters for Safety
Remember from Module 2 that AI hallucinations aren't random—they're more likely under certain conditions. Vague prompts create those conditions:
- Ambiguity forces the AI to guess your intent, and guesses can be wrong
- Broad scope requires synthesis across contexts, increasing error risk
- Lack of constraints gives the AI more freedom to generate plausible-sounding but inaccurate content
Specific prompts are a safety practice, not just an efficiency practice. They reduce the attack surface for hallucination by constraining what the AI needs to produce.
This doesn't eliminate the need for verification—you should still check important claims regardless of how you prompted. But specific prompts make verification easier (because claims are more concrete) and reduce the volume of content you need to verify (because the response is more focused).
From Module 1: AI tools are most useful when you understand what they actually do—predict likely text given input. Better input = better predictions.
From Module 2: Hallucinations are more likely with vague, open-ended prompts. Specificity is a risk-reduction strategy.
From Module 3: The principles of good clinical communication—context, specificity, clear questions—apply directly to AI interaction. You already have these skills; now apply them to a new medium.
Your Challenge: Try It Yourself
Hands-On Comparison
1. Run both prompts through a clinical AI tool you have access to (Open Evidence, Glass AI, or a general tool like Claude or ChatGPT). For API-accessible general models, a scripted version of this comparison is sketched after these steps.
2. Compare the outputs. Note:
   - How much of each response is directly relevant to the clinical scenario?
   - Which response gives you actionable guidance?
   - Which response makes claims that are easier to verify?
   - Which response would you trust more, and why?
3. Reflect on the difference. The time investment in the specific prompt was perhaps 30 seconds of additional typing. What was the return on that investment in terms of output quality?
4. Apply it forward. The next time you use an AI tool for a clinical question, deliberately construct a specific prompt before falling back on a quick, vague one. Track whether the outputs improve.
Think of a clinical question you've had recently—something you might have searched UpToDate or consulted a colleague about. Write two versions of that question as AI prompts:
- A quick, vague version (how you might naturally type it)
- A specific, well-structured version (applying what you've learned)
Run both through an AI tool. Did the extra 30 seconds of crafting the specific prompt save you time in filtering, verifying, or re-prompting? In most cases, that small investment pays for itself immediately.