FOUNDATIONS

The Art of the Ask

Prompting LLMs effectively—how to communicate with AI tools to get outputs that actually help your clinical work.

~25 min read · 6 readings · 4 podcasts
Core Question

How do you communicate effectively with AI tools to get outputs that are actually useful—and what skills do you already have that transfer directly?

What Does "Prompting" Actually Mean?

If you've spent any time reading about AI, you've encountered the term "prompting"—probably dozens of times, used in ways that seem contradictory or confusing. Before we go further, let's untangle this, because the word carries at least three distinct meanings in AI discussions, and conflating them creates unnecessary confusion.

1. Prompt as input: simply the text you give an AI model, the thing you type in the box.
2. Prompt engineering: deliberately structuring and refining inputs to get better outputs.
3. System prompts: hidden instructions that shape AI behavior before you ever interact.

For this module, we're primarily concerned with the first two meanings—what you type and how to type it better. But understanding that third layer helps explain why the same question can produce different results in different AI tools, even when they use the same underlying technology.

A better way to think about it: prompting is the skill of clear communication applied to a new medium. You're not learning arcane techniques—you're learning to communicate effectively with a particular kind of interlocutor that processes language differently than humans do. The same principles that make you a clear communicator with colleagues, patients, and trainees apply here, just with some adjustments for the AI's particular characteristics.

The Clinical Parallel: You Already Know This

Here's the thing: if you've practiced medicine for any length of time, you already understand the core principles of effective prompting. You just call it something else.

Consider the patient history. Every clinician knows that a well-taken history does most of the diagnostic work. "I don't feel good" gives you almost nothing to work with. "I've had progressive shortness of breath over two weeks, worse with exertion, no chest pain, but I noticed my ankles are swelling and I've gained about five pounds even though I'm not eating more"—that gives you a direction.

The same principle applies to AI. A vague prompt produces vague output. A specific, well-structured prompt produces focused, useful output. The clinical intuition you've developed about what information matters for a given question transfers directly.

Consider how you'd present a case to a specialist:

"I'd like your input on a 45-year-old woman with three months of progressive fatigue. She has hypothyroidism on replacement, well-controlled. Recent TSH was normal. She's noticed weight gain despite no dietary changes, and her husband says she snores loudly now, which is new. She's sleepy during the day. I'm wondering about sleep apnea, but I wanted to make sure I'm not missing anything before pursuing a sleep study."

Notice what you've done: you've established the patient, the timeline, the key symptoms, the relevant background, what you've already ruled out, what you're considering, and what you actually want from the consultant. That's expert-level prompting of a human intelligence. The same structure works for artificial intelligence.

Key Insight

The difference is that the specialist would interrupt to ask clarifying questions if you left something important out. The AI usually won't—it will just produce an answer based on incomplete information. This is why being thorough upfront matters even more with AI than with human colleagues.

Why Good Input Matters

Large language models are pattern-matching engines trained on vast amounts of text. They predict what text should come next based on statistical patterns learned during training. When you give an LLM a vague prompt, you're essentially asking it to guess what you mean. When you give it a specific, well-structured prompt, you're constraining the space of reasonable responses, making it much more likely to produce something useful.

Compare these two prompts about diabetes management:

Vague:

"Tell me about diabetes treatment."

Specific:

"I'm a primary care physician seeing a 58-year-old woman with type 2 diabetes, A1c 8.2%, on metformin 1000mg BID for three years, BMI 32, eGFR 65. She has mild peripheral neuropathy. I'm considering adding a second agent. What are the main options, their benefits and risks in her specific situation, and what would guide the choice between them?"

The first prompt will generate something encyclopedic and unhelpful—a survey of diabetes treatment written for no particular audience with no particular purpose. The second will generate a clinically relevant comparison tailored to a specific patient scenario.

The Anatomy of an Effective Prompt

Think of a good prompt as having the same structure as a good case presentation. Not every prompt needs every element, but knowing the components helps you include what matters.

The "Perfect Prompt" Formula

Most failed prompts are missing one of these three essentials.

1. Context: "Patient is 85yo female with..." Constrains the variables.
2. Task: "Summarize the risks of..." Defines the action.
3. Format: "Use bullet points, 5th grade level." Shapes the output.

Component | Purpose | Example
Context | Establishes the situation and frame of reference | "As a pediatric hospitalist..." or "For a patient education handout..."
Task | Defines what you want—be explicit | "Explain," "list," "compare," "create a differential"
Constraints | Focuses the output with guardrails | Length, format, what to exclude, what to assume
Examples | Demonstrates what you mean | Show the format, style, or approach you want

Let's see how these components work together:

Context: "I'm a family physician explaining results to a patient with limited health literacy." Task: "Help me explain what a hemoglobin A1c of 7.2% means." Constraints: "Use simple language, avoid medical jargon, keep it to 2-3 paragraphs. Don't mention specific medications." Example: "Similar to how I'd explain blood pressure: 'Your blood pressure number shows how hard your heart is working to pump blood. We want it under 130 on top because higher numbers mean your heart is working too hard, which can cause problems over time.'"

This prompt gives the AI everything it needs to produce something genuinely useful on the first try.
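If you ever move from the chat window to a script, the same anatomy translates directly. Below is a minimal Python sketch of the idea; the build_prompt helper and its parameter names are illustrative, not any vendor's API:

```python
# A minimal sketch: assembling a prompt from the four components.
# build_prompt is an illustrative helper, not a real library call.

def build_prompt(context: str, task: str,
                 constraints: str = "", example: str = "") -> str:
    """Join the components into one prompt, skipping any left empty."""
    parts = [f"Context: {context}", f"Task: {task}"]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    if example:
        parts.append(f"Example of the style I want: {example}")
    return "\n\n".join(parts)

print(build_prompt(
    context=("I'm a family physician explaining results to a patient "
             "with limited health literacy."),
    task="Help me explain what a hemoglobin A1c of 7.2% means.",
    constraints=("Simple language, no medical jargon, 2-3 paragraphs, "
                 "no specific medications."),
))
```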

Specificity: The Most Valuable Habit

If you take one thing from this module, let it be this: specificity costs nothing but transforms results.

Every time you're about to submit a prompt, pause and ask: what am I leaving implicit that I could make explicit? What assumptions am I hoping the AI will correctly guess?

Vague Prompt | Specific Prompt
"What medications treat hypertension?" | "What are first-line antihypertensive options for a 45-year-old Black woman with stage 1 hypertension and no comorbidities, based on current ACC/AHA guidelines?"
"Write discharge instructions for pneumonia." | "Write discharge instructions for an elderly patient leaving the hospital after treatment for community-acquired pneumonia. 6th-grade reading level, include when to call the doctor versus return to the ER."
"Help me with a difficult conversation." | "I'm about to tell a 70-year-old patient that his lung nodule is highly suspicious for cancer. He has anxiety and tends to catastrophize. Help me structure this conversation with specific phrases I can use."

Notice that the specific versions take longer to write. This is actually the point. The few seconds you spend thinking about what you really need get repaid in responses that actually help, rather than responses you have to fix, regenerate, or supplement.

Role Assignment: When It Helps (and When It Doesn't)

Early AI prompting guides emphasized "persona" heavily—telling the AI to "act as" a specialist. With modern models (GPT-4o, Claude 3.5+, Gemini 1.5), this technique has become less essential. Today's models are much better at inferring appropriate framing from the task itself. If you ask a detailed clinical question, the model responds clinically without being told to "act as a physician."

That said, role assignment still helps in specific situations: when you want output voiced for a particular audience (say, a skeptical colleague or an anxious patient), or when you need the model to hold a consistent perspective across a long exchange.

The key insight: specificity about your task and context matters more than persona. A detailed, well-structured prompt without any role assignment will usually outperform a vague prompt that starts with "Act as an expert." Focus your effort on context, task, and format first.

Structured Prompting: Breaking Complex Tasks into Steps

When you're asking an AI to do something complex, asking for everything at once often produces worse results than breaking the task into logical steps. This is sometimes called "chain of thought" prompting, and it mirrors how clinicians actually reason through complex cases.

"A 35-year-old woman presents with fatigue, weight gain, and cold intolerance. Step 1: Generate a differential diagnosis, listing possibilities from most to least likely based on these symptoms and her demographics. Step 2: For the top three possibilities, list what additional history, physical exam findings, or initial labs would help distinguish between them. Step 3: Her TSH comes back at 12.5 mIU/L (normal 0.4-4.0). Update your assessment and explain what this means for her differential."

This step-by-step approach produces more thorough reasoning than simply asking "What's wrong with this patient?" It also makes the AI's thinking visible, which helps you evaluate whether its reasoning is sound.
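If you build such prompts in code, the step structure is plain string assembly. A small sketch under that assumption; the variable names are invented for illustration:

```python
# Illustrative only: wrapping a vignette in explicit, numbered steps
# so the model works through the case stage by stage.
vignette = ("A 35-year-old woman presents with fatigue, weight gain, "
            "and cold intolerance.")
steps = [
    "Generate a differential diagnosis, from most to least likely.",
    ("For the top three possibilities, list what additional history, "
     "exam findings, or initial labs would distinguish between them."),
    ("Her TSH comes back at 12.5 mIU/L (normal 0.4-4.0). Update your "
     "assessment and explain what this means for her differential."),
]
numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
print(f"{vignette}\n\n{numbered}")
```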

The Power of Examples: Showing Rather Than Telling

If you want a specific format, style, or approach, showing an example is often more effective than describing what you want. This is sometimes called "few-shot prompting."

"I need to create patient education materials. Here's an example of the style I want: Example: 'WHAT IS A STRESS TEST? A stress test shows how well your heart works during exercise. You'll walk on a treadmill while we monitor your heart. The test helps us see if your heart is getting enough blood. Most people find it tiring but not painful. The whole appointment takes about an hour, but you'll only be exercising for about 10-15 minutes.' Now create similar materials explaining what to expect during a colonoscopy."

The AI will match the reading level, structure, and tone of your example, producing consistent materials without you having to describe every stylistic element.
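Scripted, few-shot prompting is nothing more exotic than prepending one or more worked examples to the request. A minimal sketch; few_shot_prompt is a hypothetical helper:

```python
# Few-shot prompting: the example carries the reading level, structure,
# and tone; the model matches them without explicit style instructions.
STYLE_EXAMPLE = (
    "WHAT IS A STRESS TEST?\n"
    "A stress test shows how well your heart works during exercise. "
    "You'll walk on a treadmill while we monitor your heart. "
    "Most people find it tiring but not painful."
)

def few_shot_prompt(request: str, examples: list[str]) -> str:
    """Prepend worked examples so the model can imitate them."""
    shots = "\n\n".join(f"Example:\n{e}" for e in examples)
    return ("I need to create patient education materials. "
            f"Here's the style I want:\n\n{shots}\n\n{request}")

print(few_shot_prompt(
    "Now create similar materials explaining what to expect "
    "during a colonoscopy.",
    [STYLE_EXAMPLE],
))
```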

Iterative Refinement: The Conversation Continues

One of the most underused features of AI assistants is their ability to have conversations. You don't have to get your prompt perfect on the first try. You can refine.

Here's a practical example of iteration in action:

Iteration Example

Initial prompt: "Help me explain heart failure to a patient."

AI response: [Produces technically accurate but dense explanation with medical terminology]

Follow-up 1: "That's too technical. Rewrite it assuming the patient has no medical background."

AI response: [Produces simpler explanation, but still long]

Follow-up 2: "Better. Now make it about 3 paragraphs, and use an analogy—like comparing the heart to a pump."

AI response: [Produces concise, analogy-based explanation]

Follow-up 3: "Good. Now add a sentence at the end about why taking their medications every day matters."

Final result: A patient-friendly explanation you couldn't have gotten in one try without much more elaborate initial prompting.

The key insight is that prompting isn't a single transaction. It's a dialogue. Treat it like a conversation with a knowledgeable but not omniscient colleague.
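Mechanically, that dialogue is a growing message history. The sketch below uses the role/content message format common to most chat-completion APIs; call_model is a stand-in for whichever client you actually use:

```python
# Iterative refinement is appending to the conversation history:
# each follow-up is interpreted in light of everything before it.

def call_model(messages: list[dict]) -> str:
    # Stand-in: replace with a real API call (OpenAI, Anthropic, etc.).
    return "[model reply]"

messages = [{"role": "user",
             "content": "Help me explain heart failure to a patient."}]

for follow_up in [
    "That's too technical. Rewrite it assuming no medical background.",
    "Better. Now make it about 3 paragraphs, with a pump analogy.",
    "Good. Add a closing sentence on why daily medications matter.",
]:
    messages.append({"role": "assistant", "content": call_model(messages)})
    messages.append({"role": "user", "content": follow_up})

# A final call on the accumulated history yields the refined version.
final = call_model(messages)
```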

Prompt Patterns for Clinical Work

Certain prompt structures work well for common clinical tasks. Here are patterns you can adapt:

Task Type | Prompt Pattern
Clinical summaries | "Summarize this [case/note] for [audience]. Focus on [key elements]. Keep it to [length]."
Patient education | "Explain [concept] to a patient who [characteristics]. Use language appropriate for [literacy level]. Avoid [exclusions]."
Differential diagnosis | "Given [presentation], generate a differential. Rate likelihood and list supporting/opposing features. Suggest discriminating information."
Literature review | "Summarize current evidence on [topic]. Note study quality and controversies. Flag where evidence is limited."
Communication assistance | "Help me communicate [information] to [recipient]. Challenge: [difficulty]. Goal: [outcome]. Suggest language and prepare for responses."
Clinical decision support | "I'm considering [intervention] for [patient]. Walk me through key factors. Arguments for and against? What would change your recommendation?"

These templates aren't magic formulas. They're starting points that remind you to include relevant context, specify your task, and constrain the output appropriately.
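One way to make the templates concrete is to keep them as fill-in strings, where every unfilled placeholder is a visible reminder of context you still owe the model. A small sketch; the pattern names and placeholders are invented for illustration:

```python
# Prompt patterns as reusable format strings; each placeholder marks
# context the prompt still needs before it is worth sending.
PATTERNS = {
    "patient_education": (
        "Explain {concept} to a patient who {characteristics}. "
        "Use language appropriate for {literacy_level}. Avoid {exclusions}."
    ),
    "clinical_summary": (
        "Summarize this {source} for {audience}. "
        "Focus on {key_elements}. Keep it to {length}."
    ),
}

prompt = PATTERNS["patient_education"].format(
    concept="a hemoglobin A1c of 7.2%",
    characteristics="has limited health literacy",
    literacy_level="a 6th-grade reading level",
    exclusions="specific medication names",
)
print(prompt)
```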

Common Pitfalls and How to Avoid Them

Kitchen Sink Problem: Too much irrelevant information confuses the AI. Be thorough about what matters, not exhaustive about everything.
Task Ambiguity: If you're not clear about what you want, the AI will guess. Use specific verbs: summarize, list, compare, explain.
Implicit Assumptions: The AI doesn't know what you know unless you tell it. Make guidelines, populations, and context explicit.
Over-reliance: AI models can be confident and wrong. Ask for alternatives, request critiques, verify independently.

Working with Uncertainty: Prompting for Nuance

One hallmark of clinical expertise is comfort with uncertainty. AI tools, by contrast, often default to confident-sounding responses even when uncertainty would be more appropriate. You can prompt for more nuanced responses:

Instead of... | Try...
"What's the best treatment for this condition?" | "What are the treatment options, and what factors would influence the choice? Where is evidence strong versus uncertain?"
"Is this finding significant?" | "How would you interpret this finding? What are different possibilities, and what additional information would help?"
"Should I refer this patient?" | "What are the considerations for and against referral? What would a reasonable clinician weigh?"

You can also explicitly request acknowledgment of limitations, for example: "If the evidence here is weak, mixed, or missing, say so directly rather than giving a single confident answer."

When Prompting Reaches Its Limits

Effective prompting can dramatically improve AI outputs, but it can't fix fundamental limitations: no prompt eliminates the risk of confident fabrication, extends the model's knowledge beyond its training data, or substitutes for independent verification. It's worth knowing when you're bumping against walls that better prompting won't move.

The Core Insight

Prompt engineering is not going to be a stand-alone job for most healthcare professionals. It's going to be a component skill—like information literacy or evidence-based practice—that makes you more effective at work that remains fundamentally human.

Personalizing Your AI: Custom Instructions

Every time you start a new conversation with an AI, it knows nothing about you. You're starting from scratch—explaining your role, your preferences, your context. But all three major AI platforms let you set custom instructions that apply to every conversation automatically. This is like giving a new colleague an orientation on their first day, so you don't have to re-explain the basics every time you work together.

Custom instructions are particularly valuable for clinicians because your context is consistent: your specialty, your patient population, how you prefer information formatted, what you typically need AI help with. Setting this up once saves repetition and produces more relevant outputs from the start.

What to Include in Custom Instructions

Think about what you'd tell a knowledgeable assistant on their first day:

Category | What to Specify | Example
Your Role | Specialty, practice setting, patient population | "I'm a pediatric hospitalist at a community hospital. Most of my patients are ages 0-18 with acute illnesses."
Tone Preferences | Formal vs conversational, concise vs detailed | "I prefer direct, concise responses. Skip the preamble and get to the point."
Format Preferences | Bullet points, tables, prose; use of headers | "Use bullet points for lists. For clinical comparisons, use tables."
Knowledge Level | What to assume you already know | "Assume physician-level medical knowledge. Don't explain basic concepts unless I ask."
Common Tasks | What you typically need help with | "I often need help drafting patient education materials, summarizing guidelines, and preparing for difficult conversations."
Guidelines & Standards | Which guidelines you follow | "Reference AAP guidelines for pediatric care. Use UpToDate as a standard reference."
Safety Reminders | What the AI should flag or avoid | "Always remind me to verify medication dosing independently. Flag anything that requires specialist consultation."

Setting Up Custom Instructions by Platform

ChatGPT

Where to find it:

  • Desktop/Web: Click your profile icon → Settings → Personalization → Custom Instructions
  • Mobile: Tap the menu → Settings → Personalization → Custom Instructions

How it works: Two text boxes—"What would you like ChatGPT to know about you?" and "How would you like ChatGPT to respond?" Each has a 1,500 character limit.

Tip: ChatGPT also has "Memory" that learns from conversations. You can review and edit what it remembers in Settings → Personalization → Memory.

Claude

Where to find it:

  • Desktop/Web: Click your profile icon → Settings → Profile → "Describe yourself to Claude"
  • Mobile: Tap menu → Settings → Profile

How it works: A single text field where you describe yourself, your work, and how you'd like Claude to respond. Claude uses this context in every new conversation.

Tip: Claude also supports "Projects" where you can set project-specific instructions and upload reference documents that persist across conversations.

Gemini

Where to find it:

  • Desktop/Web: Click Settings (gear icon) → Extensions & Data → Response preferences, or use "Saved Info" in Settings
  • Mobile: Tap your profile → Settings → Saved Info

How it works: Gemini lets you save personal details (location, work, interests) and response preferences (length, tone, complexity). It integrates with your Google account.

Tip: If you use Google Workspace, Gemini can access your Drive, Gmail, and Calendar with permission, making context even richer.

Example Custom Instructions for Clinicians

Here's a template you can adapt:

About Me:
I'm a [specialty] physician practicing at a [setting: academic/community/private practice]. My patients are primarily [population]. I've been practicing for [X] years. I typically use AI help for: drafting patient education materials, summarizing clinical literature, preparing for difficult conversations, and administrative tasks like letters and documentation.

How I'd Like Responses:
- Be concise and direct. Skip the "Great question!" preamble.
- Use bullet points for lists, tables for comparisons.
- Assume physician-level medical knowledge—don't explain basics unless I ask.
- When discussing treatments, reference current guidelines (specify: AAP, AHA, IDSA, etc.).
- For medication recommendations, always note to verify dosing independently.
- If something is outside my likely scope or requires specialist input, flag it.
- When I'm drafting patient materials, default to 6th-grade reading level unless I specify otherwise.
- If you're uncertain about something, say so rather than guessing.
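For the mechanically curious: in API terms, custom instructions behave like a system message silently prepended to each new conversation. A sketch under that assumption:

```python
# Custom instructions as a standing system message: every new chat
# starts pre-loaded with the same context. Illustrative only.
CUSTOM_INSTRUCTIONS = (
    "I'm a pediatric hospitalist at a community hospital. "
    "Be concise and direct; use bullet points for lists. "
    "Assume physician-level medical knowledge. "
    "Always remind me to verify medication dosing independently."
)

def new_conversation(first_user_message: str) -> list[dict]:
    """Start a chat with the standing instructions already in place."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": first_user_message},
    ]

chat = new_conversation("Draft a parent-facing discharge handout for croup.")
```
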
Privacy Reminder

Custom instructions are stored by the AI provider and used to personalize your experience. Don't include PHI, specific patient details, or sensitive institutional information in your custom instructions. Keep it to your general role, preferences, and workflow patterns.

Testing and Refining Your Setup

After setting custom instructions:

  1. Start a new conversation and ask a typical question without adding context. See if the response reflects your preferences.
  2. Check the tone and format. Is it as concise as you wanted? Using the right structure?
  3. Try different task types—patient education, clinical questions, administrative help. Make sure your instructions work across your common use cases.
  4. Iterate. If something isn't working, go back and refine. "Be concise" might need to become "Limit responses to 3-4 paragraphs maximum unless I ask for more detail."

Well-configured custom instructions won't make every prompt perfect, but they'll give you a meaningful head start. Instead of explaining your context every time, you can jump straight to the specific question—and get responses that are already calibrated to your needs.

Practical Exercises

To build prompting skills, try these exercises:

  1. The Specificity Upgrade: Take a question you'd naturally ask an AI and make it more specific. Then more specific again. See how outputs differ.
  2. The Format Requirement: Ask for information in three formats: paragraph, bullet points, and table. Notice which is most useful when.
  3. The Step-by-Step Approach: Take a complex clinical question and break it into sequential steps. Compare to asking all at once.
  4. The Example Provision: When asking for content in a particular style, provide an example. Compare to asking without one.
  5. The Iteration Practice: Start with a basic prompt, then refine through 3-4 rounds of follow-up. Notice how output improves.
  6. The Cross-Check: Ask a clinical question, then follow up with "What's the strongest argument against what you just said?"
  7. The Audience Shift: Ask the AI to explain a concept to a patient, then to a medical student, then to a specialist. Notice how context shapes output better than "act as" instructions.
Try This with NotebookLM

This module's concepts are perfect for hands-on practice with NotebookLM:

  • Upload a clinical guideline and practice different prompting approaches
  • Try vague vs. specific prompts on the same content and compare results
  • Use role assignment to get the same information framed for different audiences
  • Practice iterative refinement to improve initial outputs

The document-grounded nature of NotebookLM makes it a safe sandbox to build prompting intuition before applying these skills to general-purpose AI tools.

Readings

Prioritize This!
Offcall · The practical cheat sheet. Mnemonic-based frameworks (like "RTFT": Role, Task, Format, Tone) that busy providers can mentally reference for immediate results during administrative tasks or clinical support.
JMIR Medical Education · The academic standard. Defines specific techniques like "zero-shot," "chain-of-thought," and "curriculum prompting" for clinical scenarios.
Thomas F. Heston, MD (WSU) · A comprehensive open-access textbook treating prompting as a pedagogical tool—showing how "teaching" an AI to solve a case helps students master the pathology themselves.
Anthropic · The best technical guide on system prompts and structuring complex asks.
PMC · Explains "Zero-shot" vs "Few-shot" learning with clinical examples.
Frontiers in Medicine · How to create patient personas for practicing difficult conversations.
New England Journal of Medicine · The ongoing source of truth. Subscribe for the long term to separate hype from evidence-based AI utility. Frequent "For Clinicians" pieces evaluate tools and strategies.

Podcasts & Video

Abdulhameed Dere, MD · A rare guide bridging engineering concepts with clinical needs.
Microsoft Research · Peter Lee & UCSF/UCSD Health leaders on what's actually working.
Stanford Medicine · Jonathan Chen, MD on using AI to support (not replace) clinical judgment.

Books

The AI Revolution in Medicine: GPT-4 and Beyond
Lee P, Goldberg C, Kohane I · Pearson 2023 · How LLMs will transform medical practice
Deep Medicine: How AI Can Make Healthcare Human Again
Topol E · Basic Books 2019 · AI as augmentation, restoring humanism to medicine

Reflection Questions

  1. Think of a recent clinical question you had. How would you prompt an AI to help with it? What context, constraints, and specificity would you include?
  2. How does the structure of a good case presentation parallel the structure of a good prompt? What elements transfer directly?
  3. When has vague communication led to problems in your clinical work? How might those same issues appear when communicating with AI?
  4. What clinical tasks in your workflow might benefit most from well-crafted prompts? What makes those tasks good candidates?

Learning Objectives

  • Explain the three meanings of "prompting" and how they differ
  • Apply clinical communication skills (specificity, context, structure) to AI interactions
  • Construct effective prompts using context, task, constraints, and examples
  • Use role assignment and chain-of-thought techniques to improve AI outputs
  • Practice iterative refinement to get useful results without perfect initial prompts
  • Recognize when prompting limitations require verification or alternative approaches