The Art of the Ask
Prompting LLMs effectively—how to communicate with AI tools to get outputs that actually help your clinical work.
How do you communicate effectively with AI tools to get outputs that are actually useful—and what skills do you already have that transfer directly?
What Does "Prompting" Actually Mean?
If you've spent any time reading about AI, you've encountered the term "prompting"—probably dozens of times, used in ways that seem contradictory or confusing. Before we go further, let's untangle this, because the word carries at least three distinct meanings in AI discussions, and conflating them creates unnecessary confusion:
- The input itself: the text you actually type into the tool, whether a question, an instruction, or a pasted document.
- The skill of crafting that input: what people mean by "prompt engineering," the techniques for getting more useful outputs.
- The hidden system prompt: instructions the tool's developer adds behind the scenes, which shape every response before you type anything.
For this module, we're primarily concerned with the first two meanings—what you type and how to type it better. But understanding that third layer helps explain why the same question can produce different results in different AI tools, even when they use the same underlying technology.
A better way to think about it: prompting is the skill of clear communication applied to a new medium. You're not learning arcane techniques—you're learning to communicate effectively with a particular kind of interlocutor that processes language differently than humans do. The same principles that make you a clear communicator with colleagues, patients, and trainees apply here, just with some adjustments for the AI's particular characteristics.
The Clinical Parallel: You Already Know This
Here's the thing: if you've practiced medicine for any length of time, you already understand the core principles of effective prompting. You just call it something else.
Consider the patient history. Every clinician knows that a well-taken history does most of the diagnostic work. "I don't feel good" gives you almost nothing to work with. "I've had progressive shortness of breath over two weeks, worse with exertion, no chest pain, but I noticed my ankles are swelling and I've gained about five pounds even though I'm not eating more"—that gives you a direction.
The same principle applies to AI. A vague prompt produces vague output. A specific, well-structured prompt produces focused, useful output. The clinical intuition you've developed about what information matters for a given question transfers directly.
Consider how you'd present a case to a specialist:

> "I have a 62-year-old man with three days of worsening dyspnea on exertion. History of hypertension and a 30-pack-year smoking history. No fever, no chest pain. Chest X-ray shows mild cardiomegaly without infiltrate; BNP is pending. I've ruled out pneumonia, and I'm weighing new heart failure against a COPD exacerbation. Can you see him today to help sort out the cardiac workup?"
Notice what you've done: you've established the patient, the timeline, the key symptoms, the relevant background, what you've already ruled out, what you're considering, and what you actually want from the consultant. That's expert-level prompting of a human intelligence. The same structure works for artificial intelligence.
The difference is that the specialist would interrupt to ask clarifying questions if you left something important out. The AI usually won't—it will just produce an answer based on incomplete information. This is why being thorough upfront matters even more with AI than with human colleagues.
Why Good Input Matters
Large language models are pattern-matching engines trained on vast amounts of text. They predict what text should come next based on statistical patterns learned during training. When you give an LLM a vague prompt, you're essentially asking it to guess what you mean. When you give it a specific, well-structured prompt, you're constraining the space of reasonable responses, making it much more likely to produce something useful.
Compare these two prompts about diabetes management:
Vague:

> "Tell me about diabetes treatment."

Specific:

> "For a 58-year-old with type 2 diabetes (A1c 8.2%) already on metformin, with stage 3 chronic kidney disease and a prior hypoglycemic episode, compare adding an SGLT2 inhibitor versus a GLP-1 receptor agonist. Present the comparison as a table covering efficacy, renal considerations, hypoglycemia risk, and cost."
The first prompt will generate something encyclopedic and unhelpful—a survey of diabetes treatment written for no particular audience with no particular purpose. The second will generate a clinically relevant comparison tailored to a specific patient scenario.
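If you (or your informatics team) ever want to test this difference outside the chat window, the comparison is easy to script. Here is a minimal sketch in Python, assuming the openai package and an API key in your environment; the model name and prompt strings are illustrative, not a prescribed setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

VAGUE = "Tell me about diabetes treatment."
SPECIFIC = (
    "For a 58-year-old with type 2 diabetes (A1c 8.2%) already on metformin, "
    "with stage 3 chronic kidney disease and a prior hypoglycemic episode, "
    "compare adding an SGLT2 inhibitor versus a GLP-1 receptor agonist. "
    "Present the comparison as a table covering efficacy, renal "
    "considerations, hypoglycemia risk, and cost."
)

# Send both prompts to the same model and compare the outputs side by side.
for label, prompt in [("Vague", VAGUE), ("Specific", SPECIFIC)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # any current chat model illustrates the point
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

Running both prompts back to back makes the difference concrete: identical model, identical settings, dramatically different usefulness.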
The Anatomy of an Effective Prompt
Think of a good prompt as having the same structure as a good case presentation. Not every prompt needs every element, but knowing the components helps you include what matters.
Most failed prompts are missing at least one of the first three components below: context, task, or constraints. Examples are the optional fourth.
| Component | Purpose | Example |
|---|---|---|
| Context | Establishes the situation and frame of reference | "As a pediatric hospitalist..." or "For a patient education handout..." |
| Task | Defines what you want—be explicit | "Explain," "list," "compare," "create a differential" |
| Constraints | Focuses the output with guardrails | Length, format, what to exclude, what to assume |
| Examples | Demonstrates what you mean | Show format, style, or approach you want |
Let's see how these components work together:

> "I'm a family physician preparing materials for patients newly started on statins (context). Create a one-page handout explaining common statin side effects and what to do if they occur (task). Write at an 8th-grade reading level, under 300 words, with a reassuring tone and no statistics (constraints). Match the friendly style of our existing handouts, which open with a sentence like 'This medicine helps protect your heart' (example)."
This prompt gives the AI everything it needs to produce something genuinely useful on the first try.
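If you build prompts like this repeatedly, the four components can be treated as fields and assembled mechanically. A minimal sketch in Python; the helper name and wording are hypothetical, not a standard:

```python
def build_prompt(context: str, task: str, constraints: str, example: str = "") -> str:
    """Assemble a prompt from the four components described above.

    Hypothetical helper: the exact wording is illustrative, but the order
    (context first, constraints and examples last) mirrors a good case
    presentation.
    """
    parts = [context, task, constraints]
    if example:
        parts.append(f"Match the style of this example:\n{example}")
    return "\n\n".join(part.strip() for part in parts if part.strip())


prompt = build_prompt(
    context="I'm a family physician preparing materials for patients newly started on statins.",
    task="Create a one-page handout explaining common statin side effects and what to do if they occur.",
    constraints="8th-grade reading level, under 300 words, reassuring tone, no statistics.",
)
print(prompt)
```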
Specificity: The Most Valuable Habit
If you take one thing from this module, let it be this: specificity costs nothing but transforms results.
Every time you're about to submit a prompt, pause and ask: what am I leaving implicit that I could make explicit? What assumptions am I hoping the AI will correctly guess?
| Vague Prompt | Specific Prompt |
|---|---|
| "What medications treat hypertension?" | "What are first-line antihypertensive options for a 45-year-old Black woman with stage 1 hypertension and no comorbidities, based on current ACC/AHA guidelines?" |
| "Write discharge instructions for pneumonia." | "Write discharge instructions for an elderly patient leaving the hospital after treatment for community-acquired pneumonia. 6th-grade reading level, include when to call the doctor versus return to the ER." |
| "Help me with a difficult conversation." | "I'm about to tell a 70-year-old patient that his lung nodule is highly suspicious for cancer. He has anxiety and tends to catastrophize. Help me structure this conversation with specific phrases I can use." |
Notice that the specific versions take longer to write. This is actually the point. The few seconds you spend thinking about what you really need get repaid in responses that actually help, rather than responses you have to fix, regenerate, or supplement.
Role Assignment: When It Helps (and When It Doesn't)
Early AI prompting guides emphasized "persona" heavily—telling the AI to "act as" a specialist. With modern models (GPT-4o, Claude 3.5+, Gemini 1.5), this technique has become less essential. Today's models are much better at inferring appropriate framing from the task itself. If you ask a detailed clinical question, the model responds clinically without being told to "act as a physician."
That said, role assignment still helps in specific situations:
- Adjusting communication style: "Explain this to a patient with an 8th-grade reading level" or "Write this for a medical student" shapes tone and depth.
- Simulation and practice: "Act as a patient with poorly controlled diabetes who is resistant to starting insulin" creates useful role-play scenarios for training.
- Perspective-taking: "What would a clinical pharmacist flag about this regimen?" can surface considerations you might miss.
The key insight: specificity about your task and context matters more than persona. A detailed, well-structured prompt without any role assignment will usually outperform a vague prompt that starts with "Act as an expert." Focus your effort on context, task, and format first.
Structured Prompting: Breaking Complex Tasks into Steps
When you're asking an AI to do something complex, asking for everything at once often produces worse results than breaking the task into logical steps. This is sometimes called "chain of thought" prompting, and it mirrors how clinicians actually reason through complex cases. For example:

> "Here's a case: a 68-year-old with two weeks of progressive dyspnea, ankle edema, and a five-pound weight gain. Step 1: list the key abnormal findings. Step 2: give a brief differential for each. Step 3: identify which diagnoses explain all the findings together. Step 4: recommend the single most useful next test and explain why."
This step-by-step approach produces more thorough reasoning than simply asking "What's wrong with this patient?" It also makes the AI's thinking visible, which helps you evaluate whether its reasoning is sound.
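In scripted form, the same idea is a short multi-turn exchange in which each step's answer stays in the conversation history, so later steps build on earlier ones. A minimal sketch, assuming the openai Python package; the case stub and step wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Illustrative case stub; in practice you would paste a de-identified case.
CASE = "68-year-old with 2 weeks of progressive dyspnea, ankle edema, 5-lb weight gain."

STEPS = [
    f"Here is a case: {CASE} Step 1: list the key abnormal findings.",
    "Step 2: give a brief differential for each finding.",
    "Step 3: which diagnoses explain all the findings together?",
    "Step 4: recommend the single most useful next test and explain why.",
]

messages = []
for step in STEPS:
    messages.append({"role": "user", "content": step})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    # Keep the model's answer in the history so later steps build on it.
    messages.append({"role": "assistant", "content": answer})
    print(answer, "\n")
```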
The Power of Examples: Showing Rather Than Telling
If you want a specific format, style, or approach, showing an example is often more effective than describing what you want. This is sometimes called "few-shot prompting." For instance:

> "Here's a patient handout we use for asthma: [paste handout]. Write one in the same style for eczema."
The AI will match the reading level, structure, and tone of your example, producing consistent materials without you having to describe every stylistic element.
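Scripted, few-shot prompting is simply an example exchange placed ahead of the real request. A minimal sketch, assuming the openai Python package; the handout text is a shortened stand-in for a real document you are happy with:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# A shortened stand-in for a real handout whose style you want to reuse.
ASTHMA_EXAMPLE = (
    "Asthma makes the small tubes in your lungs swell and tighten. "
    "Your inhaler opens them back up. Use it exactly as prescribed..."
)

messages = [
    # The example pair shows the model the format and voice you want.
    {"role": "user", "content": "Write a short patient handout about asthma."},
    {"role": "assistant", "content": ASTHMA_EXAMPLE},
    # The real request: the model imitates the example above.
    {"role": "user", "content": "Now write one in the same style for eczema."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```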
Iterative Refinement: The Conversation Continues
One of the most underused features of AI assistants is their ability to have conversations. You don't have to get your prompt perfect on the first try. You can refine.
- If the AI produces something too technical: "That's too complex. Simplify it for a patient with an 8th-grade reading level."
- If it's too long: "Good content, but condense to half the length."
- If it missed your point: "I was asking specifically about dosing considerations for patients with renal impairment."
- If you want it to reconsider: "What would be the argument against your recommendation?"
Here's a practical example of iteration in action:
Iteration Example
Initial prompt: "Help me explain heart failure to a patient."
AI response: [Produces technically accurate but dense explanation with medical terminology]
Follow-up 1: "That's too technical. Rewrite it assuming the patient has no medical background."
AI response: [Produces simpler explanation, but still long]
Follow-up 2: "Better. Now make it about 3 paragraphs, and use an analogy—like comparing the heart to a pump."
AI response: [Produces concise, analogy-based explanation]
Follow-up 3: "Good. Now add a sentence at the end about why taking their medications every day matters."
Final result: A patient-friendly explanation you couldn't have gotten in one try without much more elaborate initial prompting.
The key insight is that prompting isn't a single transaction. It's a dialogue. Treat it like a conversation with a knowledgeable but not omniscient colleague.
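The same refinement loop can be scripted by replaying each follow-up against the accumulated conversation. A minimal sketch, assuming the openai Python package; the turns mirror the iteration example above:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Each turn refines the previous answer, mirroring the iteration example.
turns = [
    "Help me explain heart failure to a patient.",
    "That's too technical. Rewrite it assuming no medical background.",
    "Better. Make it about 3 paragraphs, with an analogy comparing the heart to a pump.",
    "Good. Add a closing sentence on why taking medications every day matters.",
]

messages = []
reply = ""
for turn in turns:
    messages.append({"role": "user", "content": turn})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    # Keep each answer in the history so the next refinement applies to it.
    messages.append({"role": "assistant", "content": reply})

print(reply)  # the final, refined explanation
```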
Prompt Patterns for Clinical Work
Certain prompt structures work well for common clinical tasks. Here are patterns you can adapt:
| Task Type | Prompt Pattern |
|---|---|
| Clinical summaries | "Summarize this [case/note] for [audience]. Focus on [key elements]. Keep it to [length]." |
| Patient education | "Explain [concept] to a patient who [characteristics]. Use language appropriate for [literacy level]. Avoid [exclusions]." |
| Differential diagnosis | "Given [presentation], generate a differential. Rate likelihood and list supporting/opposing features. Suggest discriminating information." |
| Literature review | "Summarize current evidence on [topic]. Note study quality and controversies. Flag where evidence is limited." |
| Communication assistance | "Help me communicate [information] to [recipient]. Challenge: [difficulty]. Goal: [outcome]. Suggest language and prepare for responses." |
| Clinical decision support | "I'm considering [intervention] for [patient]. Walk me through key factors. Arguments for and against? What would change your recommendation?" |
These templates aren't magic formulas. They're starting points that remind you to include relevant context, specify your task, and constrain the output appropriately.
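If you reuse these patterns often, they can live as fill-in-the-blank templates. A minimal sketch in Python; the pattern names and fields are hypothetical, keyed to two rows of the table above:

```python
# Hypothetical pattern library: each template mirrors a row of the table above.
PATTERNS = {
    "summary": (
        "Summarize this {material} for {audience}. "
        "Focus on {key_elements}. Keep it to {length}."
    ),
    "patient_education": (
        "Explain {concept} to a patient who {characteristics}. "
        "Use language appropriate for {literacy_level}. Avoid {exclusions}."
    ),
}

prompt = PATTERNS["patient_education"].format(
    concept="atrial fibrillation",
    characteristics="was just diagnosed and is anxious about stroke risk",
    literacy_level="an 8th-grade reading level",
    exclusions="statistics and drug brand names",
)
print(prompt)
```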
Common Pitfalls and How to Avoid Them
A few failure modes account for most disappointing AI interactions, and each has a straightforward fix:

| Pitfall | Why It Fails | How to Avoid It |
|---|---|---|
| Vague, one-line prompts | The AI guesses at your intent and produces generic output | Add context, task, and constraints before submitting |
| Expecting the AI to ask clarifying questions | Unlike a consultant, it usually just answers with whatever it has | Be thorough upfront; state what you're assuming |
| Accepting the first response | First drafts are rarely calibrated to your needs | Iterate on length, level, and focus through follow-ups |
| Treating confident output as verified | Models sound certain even when wrong | Prompt for caveats and verify independently |
Working with Uncertainty: Prompting for Nuance
One hallmark of clinical expertise is comfort with uncertainty. AI tools, by contrast, often default to confident-sounding responses even when uncertainty would be more appropriate. You can prompt for more nuanced responses:
| Instead of... | Try... |
|---|---|
| "What's the best treatment for this condition?" | "What are the treatment options, and what factors would influence the choice? Where is evidence strong versus uncertain?" |
| "Is this finding significant?" | "How would you interpret this finding? What are different possibilities, and what additional information would help?" |
| "Should I refer this patient?" | "What are the considerations for and against referral? What would a reasonable clinician weigh?" |
You can also explicitly request acknowledgment of limitations (a small reusable helper for this is sketched after the list):
- "After answering, note any important caveats or situations where this guidance might not apply."
- "If the evidence is limited or conflicting, say so explicitly rather than presenting a definitive answer."
- "What would you want to know that you don't have information about in order to answer this better?"
When Prompting Reaches Its Limits
Effective prompting can dramatically improve AI outputs, but it can't fix fundamental limitations. It's worth knowing when you're bumping against walls that better prompting won't move.
- Knowledge cutoffs: Most AI models were trained on data up to a certain date and don't know about subsequent events, guidelines, or research.
- Hallucination: AI models can generate plausible-sounding but false information. No prompt can guarantee accuracy. Verification remains essential.
- Proprietary information: The AI doesn't have access to your hospital's protocols, your patient's chart, or your organization's policies unless you provide it.
- Tasks requiring real-world interaction: The AI can help you plan a conversation, but it can't have it. It can suggest what to look for on exam, but it can't perform it.
- Logical reasoning: AI models can still fail at multi-step logic, unusual edge cases, or situations requiring common sense.
Prompt engineering is not going to be a stand-alone job for most healthcare professionals. It's going to be a component skill—like information literacy or evidence-based practice—that makes you more effective at work that remains fundamentally human.
Personalizing Your AI: Custom Instructions
Every time you start a new conversation with an AI, it knows nothing about you. You're starting from scratch—explaining your role, your preferences, your context. But all three major AI platforms let you set custom instructions that apply to every conversation automatically. This is like giving a new colleague an orientation on their first day, so you don't have to re-explain the basics every time you work together.
Custom instructions are particularly valuable for clinicians because your context is consistent: your specialty, your patient population, how you prefer information formatted, what you typically need AI help with. Setting this up once saves repetition and produces more relevant outputs from the start.
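In API terms, custom instructions behave like a standing system message silently prepended to every conversation. A minimal sketch of the same idea, assuming the openai Python package; the instruction text is illustrative, and as the caution later in this section notes, it should never contain PHI:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# A standing "custom instructions" block, reused for every new conversation.
# Illustrative content only; keep PHI and patient specifics out of it.
CUSTOM_INSTRUCTIONS = (
    "I'm a pediatric hospitalist at a community hospital. "
    "Assume physician-level knowledge. Be direct and concise. "
    "Use tables for clinical comparisons. "
    "Always remind me to verify medication dosing independently."
)


def new_conversation(question: str) -> str:
    """Start a fresh conversation that already carries your standing context."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(new_conversation("Outline empiric management of suspected pediatric pneumonia."))
```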
What to Include in Custom Instructions
Think about what you'd tell a knowledgeable assistant on their first day:
| Category | What to Specify | Example |
|---|---|---|
| Your Role | Specialty, practice setting, patient population | "I'm a pediatric hospitalist at a community hospital. Most of my patients are ages 0-18 with acute illnesses." |
| Tone Preferences | Formal vs conversational, concise vs detailed | "I prefer direct, concise responses. Skip the preamble and get to the point." |
| Format Preferences | Bullet points, tables, prose; use of headers | "Use bullet points for lists. For clinical comparisons, use tables." |
| Knowledge Level | What to assume you already know | "Assume physician-level medical knowledge. Don't explain basic concepts unless I ask." |
| Common Tasks | What you typically need help with | "I often need help drafting patient education materials, summarizing guidelines, and preparing for difficult conversations." |
| Guidelines & Standards | Which guidelines you follow | "Reference AAP guidelines for pediatric care. Use UpToDate as a standard reference." |
| Safety Reminders | What the AI should flag or avoid | "Always remind me to verify medication dosing independently. Flag anything that requires specialist consultation." |
Setting Up Custom Instructions by Platform
ChatGPT
Where to find it:
- Desktop/Web: Click your profile icon → Settings → Personalization → Custom Instructions
- Mobile: Tap the menu → Settings → Personalization → Custom Instructions
How it works: Two text boxes—"What would you like ChatGPT to know about you?" and "How would you like ChatGPT to respond?" Each has a 1,500 character limit.
Tip: ChatGPT also has "Memory" that learns from conversations. You can review and edit what it remembers in Settings → Personalization → Memory.
Claude
Where to find it:
- Desktop/Web: Click your profile icon → Settings → Profile → "Describe yourself to Claude"
- Mobile: Tap menu → Settings → Profile
How it works: A single text field where you describe yourself, your work, and how you'd like Claude to respond. Claude uses this context in every new conversation.
Tip: Claude also supports "Projects" where you can set project-specific instructions and upload reference documents that persist across conversations.
Gemini
Where to find it:
- Desktop/Web: Click Settings (gear icon) → Extensions & Data → Response preferences, or use "Saved Info" in Settings
- Mobile: Tap your profile → Settings → Saved Info
How it works: Gemini lets you save personal details (location, work, interests) and response preferences (length, tone, complexity). It integrates with your Google account.
Tip: If you use Google Workspace, Gemini can access your Drive, Gmail, and Calendar with permission, making context even richer.
Example Custom Instructions for Clinicians
Here's a template you can adapt:

> "I'm a [specialty] practicing at a [setting]; most of my patients are [population]. Assume physician-level medical knowledge and skip basic explanations. Be direct and concise: bullet points for lists, tables for clinical comparisons. I most often need help drafting patient education materials, summarizing guidelines, and preparing for difficult conversations. Reference [the guidelines you follow]. Always remind me to verify medication dosing independently, and flag anything that warrants specialist consultation."
Custom instructions are stored by the AI provider and used to personalize your experience. Don't include PHI, specific patient details, or sensitive institutional information in your custom instructions. Keep it to your general role, preferences, and workflow patterns.
Testing and Refining Your Setup
After setting custom instructions:
- Start a new conversation and ask a typical question without adding context. See if the response reflects your preferences.
- Check the tone and format. Is it as concise as you wanted? Using the right structure?
- Try different task types—patient education, clinical questions, administrative help. Make sure your instructions work across your common use cases.
- Iterate. If something isn't working, go back and refine. "Be concise" might need to become "Limit responses to 3-4 paragraphs maximum unless I ask for more detail."
Well-configured custom instructions won't make every prompt perfect, but they'll give you a meaningful head start. Instead of explaining your context every time, you can jump straight to the specific question—and get responses that are already calibrated to your needs.
Practical Exercises
To build prompting skills, try these exercises:
- The Specificity Upgrade: Take a question you'd naturally ask an AI and make it more specific. Then more specific again. See how outputs differ.
- The Format Requirement: Ask for information in three formats: paragraph, bullet points, and table. Notice which is most useful when.
- The Step-by-Step Approach: Take a complex clinical question and break it into sequential steps. Compare to asking all at once.
- The Example Provision: When asking for content in a particular style, provide an example. Compare to asking without one.
- The Iteration Practice: Start with a basic prompt, then refine through 3-4 rounds of follow-up. Notice how output improves.
- The Cross-Check: Ask a clinical question, then follow up with "What's the strongest argument against what you just said?"
- The Audience Shift: Ask the AI to explain a concept to a patient, then to a medical student, then to a specialist. Notice how context shapes output better than "act as" instructions.
See exactly how vague vs. specific prompts produce dramatically different outputs using a real clinical scenario about pediatric statin therapy.
This module's concepts are perfect for hands-on practice with NotebookLM:
- Upload a clinical guideline and practice different prompting approaches
- Try vague vs. specific prompts on the same content and compare results
- Use role assignment to get the same information framed for different audiences
- Practice iterative refinement to improve initial outputs
The document-grounded nature of NotebookLM makes it a safe sandbox to build prompting intuition before applying these skills to general-purpose AI tools.
Reflection Questions
- Think of a recent clinical question you had. How would you prompt an AI to help with it? What context, constraints, and specificity would you include?
- How does the structure of a good case presentation parallel the structure of a good prompt? What elements transfer directly?
- When has vague communication led to problems in your clinical work? How might those same issues appear when communicating with AI?
- What clinical tasks in your workflow might benefit most from well-crafted prompts? What makes those tasks good candidates?
Learning Objectives
- Explain the three meanings of "prompting" and how they differ
- Apply clinical communication skills (specificity, context, structure) to AI interactions
- Construct effective prompts using context, task, constraints, and examples
- Use role assignment and chain-of-thought techniques to improve AI outputs
- Practice iterative refinement to get useful results without perfect initial prompts
- Recognize when prompting limitations require verification or alternative approaches