USING AI

Clinical Decision Support Tools in Daily Practice

From UpToDate to OpenEvidence: how to access, evaluate, and integrate AI-powered clinical decision support into your workflow.

~30 min read · Practical guide
Core Question

How can AI-powered clinical decision support tools augment your clinical reasoning without replacing the judgment that makes you a physician?

Introduction: From Textbooks to the Exam Room

Every clinician knows this moment: a patient presents with an unusual constellation of findings, and you need answers. Not tomorrow, not after a literature review—now, while they're sitting in front of you waiting for your expertise.

For decades, the options were limited. Flip through a reference text. Call a colleague. Rely on memory. The knowledge existed somewhere, but accessing it at the point of care meant either slowing down or proceeding with uncertainty.

Clinical Decision Support (CDS) tools emerged to solve exactly this problem. They represent healthcare's attempt to put the right information in the right hands at the right moment. What started as simple drug interaction alerts in the 1970s has evolved into sophisticated AI-powered systems that can synthesize millions of peer-reviewed papers into actionable guidance.

Building on Earlier Topics

This topic builds on everything we've covered so far. You understand how language models work, you know how to prompt them effectively, and you've explored the major platforms. You also understand the privacy considerations around PHI.

Now we're putting it all together—learning how to use CDS tools to practice better medicine. The goal isn't to replace your clinical judgment. It's to augment it with the best available evidence, delivered when and where you need it.

Prioritize This!

If you only do one thing after reading this topic: sign up for OpenEvidence.

If you have an NPI number, registration takes 2 minutes and gives you free access to the platform used by 40% of U.S. physicians. It's the fastest way to experience what AI-powered clinical decision support actually feels like in practice.

Medical students: OpenEvidence now accepts student registration. You'll need to upload proof of enrollment (student ID, transcript, or enrollment letter). If that's not available yet, Glass Health offers free accounts without NPI verification—start there.


Part 1: A Brief History of Clinical Decision Support

Understanding where CDS came from helps explain both its current capabilities and its limitations. The history is one of ambition, partial success, and continuous reinvention.

The Expert System Era (1970s–1990s)

The roots of clinical decision support trace to the earliest days of medical computing. In 1959, researchers Ledley and Lusted published a landmark paper, "Reasoning Foundations of Medical Diagnosis," proposing that computers could assist with diagnostic reasoning. It took another decade for this vision to become reality.

MYCIN, developed at Stanford University in the early 1970s, became the most famous of the early medical expert systems. Created as Edward Shortliffe's doctoral work, MYCIN was designed to diagnose bacterial infections and recommend antibiotic therapy. It worked through "backward chaining"—starting with a hypothesis (suspected infection), then systematically gathering evidence to prove or disprove it through a series of yes/no questions.

MYCIN contained roughly 600 rules, each encoding a piece of expert knowledge in if-then format: "If the organism stains gram-negative, and the morphology is rod-shaped, and the patient has been hospitalized, then there is suggestive evidence that the organism is Pseudomonas aeruginosa." The system could explain its reasoning at each step—a feature that increased physician trust and that modern AI developers are still trying to replicate with "explainable AI."
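The if-then rule format above can be made concrete with a toy sketch. This is intuition only: the rule contents and certainty values are invented, and it shows simple rule matching rather than MYCIN's actual backward-chaining control strategy or its full certainty-factor calculus.

```python
# Toy illustration of MYCIN-style if-then rules. The rules and
# certainty values are invented for demonstration; real MYCIN held
# roughly 600 rules and a richer certainty-factor calculus.

RULES = [
    # (conditions that must all hold, conclusion, certainty factor)
    ({"gram_stain": "negative", "morphology": "rod", "hospitalized": True},
     "Pseudomonas aeruginosa", 0.6),
    ({"gram_stain": "positive", "morphology": "cocci_clusters"},
     "Staphylococcus aureus", 0.7),
]

def suggest_organisms(findings):
    """Return (organism, certainty) for every rule whose conditions
    are all satisfied by the observed findings."""
    matches = []
    for conditions, organism, cf in RULES:
        if all(findings.get(key) == value for key, value in conditions.items()):
            matches.append((organism, cf))
    return matches

findings = {"gram_stain": "negative", "morphology": "rod", "hospitalized": True}
print(suggest_organisms(findings))  # [('Pseudomonas aeruginosa', 0.6)]
```

The explanation feature the text describes falls out naturally from this structure: because each conclusion is tied to an explicit rule, the system can report exactly which conditions fired.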

In rigorous evaluations, MYCIN performed comparably to infectious disease specialists in diagnosing difficult infections like meningitis and selecting appropriate antibiotics. It outperformed general practitioners. Yet it was never deployed in real clinical practice. Doctors remained reluctant to trust computer reasoning. Liability concerns proved intractable. And the knowledge base was brittle—it couldn't handle situations outside its narrow domain.

Around the same time, researchers at the University of Pittsburgh developed INTERNIST-1, a more ambitious system covering all of internal medicine rather than just infectious disease. Built on years of interviews with legendary diagnostician Dr. Jack Myers, INTERNIST-1 could evaluate hundreds of diseases. It evolved into CADUCEUS, which by the mid-1980s could diagnose nearly 1,000 conditions and was described as the "most knowledge-intensive expert system in existence."

But INTERNIST-1 revealed the fundamental limitations of rule-based reasoning. Real patients have comorbidities. Real data is incomplete. Real medicine involves messy ambiguity that neat hierarchies of if-then rules couldn't capture. The system struggled with complex cases—exactly the situations where help was most needed.

The Knowledge Acquisition Bottleneck

The 1980s brought a proliferation of expert systems across medical specialties. CASNET diagnosed glaucoma. DXplain generated differential diagnoses. Monitoring systems were developed for ICUs. But all of these systems hit the same wall: the knowledge acquisition bottleneck.

Converting human expertise into computer-readable rules proved enormously difficult. Physicians couldn't always articulate their tacit knowledge. Medical science kept advancing, making rule bases obsolete. Maintaining these systems required constant effort from both medical experts and computer scientists—effort that couldn't scale.

The late 1980s marked the beginning of what historians call the "AI winter." Funding dried up. Enthusiasm faded. Expert systems that had seemed revolutionary began gathering dust.

The Evidence-Based Medicine Revolution (1990s–2000s)

While AI struggled, a parallel movement transformed how physicians accessed knowledge. Evidence-based medicine (EBM) emerged as a formal discipline, emphasizing that clinical decisions should rest on the best available research rather than tradition or authority.

The challenge was practical: how could busy clinicians access relevant evidence at the point of care? Medical knowledge was doubling every few years. Keeping current with the literature was impossible. Reading a 50-page systematic review between patients wasn't realistic.

The solution came from a Harvard-trained internist named Dr. Burton "Bud" Rose, who in 1992 founded what would become the most influential clinical reference tool of the next three decades: UpToDate. The concept was simple but powerful: synthesize the medical literature into practical, continuously updated recommendations that clinicians could access quickly.

UpToDate wasn't AI in the technical sense. It relied on human physician-editors reading the literature and distilling it into actionable guidance. But it solved the same problem that expert systems had tried to solve—getting the right information to clinicians when they needed it—through a different approach.

The timing was perfect. The internet made distribution feasible. Personal computers arrived in clinics and hospitals. By the mid-2000s, UpToDate had become ubiquitous in American medical training. Today, clinicians turn to it approximately 1.6 million times daily. Research suggests that about a third of these lookups change clinical practice.

The Modern Era: AI Returns (2020s)

Two developments converged to bring AI back to clinical decision support.

First, machine learning techniques matured dramatically. Neural networks could now learn patterns from vast datasets rather than requiring rules to be hand-coded. Systems could improve with experience rather than requiring manual updates.

Second, large language models emerged. When ChatGPT launched in November 2022, physicians immediately began experimenting. Within months, studies showed that LLMs could pass medical licensing exams and generate plausible clinical plans. The potential for AI-powered CDS was suddenly obvious.

The current generation of CDS tools combines the best of both traditions. They incorporate curated medical knowledge bases (the UpToDate approach) with sophisticated AI that can understand natural language queries and synthesize information across sources (the expert system vision, finally realized through modern techniques).

OpenEvidence, Glass Health, and a new wave of competitors now offer what MYCIN's creators could only dream of: AI assistants that can engage in genuine clinical reasoning, draw from millions of peer-reviewed papers, and provide answers in seconds.

We're in the early innings of this transformation. The tools are imperfect, the appropriate use cases are still being defined, and the integration with clinical workflows remains clunky. But the trajectory is clear: AI-powered clinical decision support is becoming part of the standard of care.


Part 2: The Current Landscape of CDS Tools

Today's clinicians can choose from three broad categories of clinical decision support: traditional knowledge bases, AI-powered clinical assistants, and general-purpose AI models applied to medicine.

Traditional Knowledge Bases

UpToDate

Remains the gold standard for evidence synthesis. Physician-editors continually review the literature and translate it into practical recommendations graded by evidence quality. The content covers over 25 medical specialties with more than 12,000 topics. In 2025, UpToDate introduced "Expert AI"—a conversational interface that layers generative AI on top of its curated content, allowing clinicians to ask questions and receive answers grounded in UpToDate's recommendations.

DynaMed (now DynaMedex)

Offers a similar approach with some philosophical differences. It emphasizes systematic evidence grading and updates content multiple times daily rather than waiting for scheduled reviews. Some studies suggest DynaMed provides stronger formal grading of recommendation strength. For clinicians who want maximum transparency about evidence levels, it's worth considering.

Clinical Key

From Elsevier, combines textbook content with procedure videos, drug information, and clinical trial results. It's particularly strong for procedural specialties.

The strength of traditional knowledge bases is quality control. Human experts curate the content. Recommendations are vetted. Sources are clear. The limitation is flexibility—you can search for topics, but you can't have a conversation. And synthesis across topics requires jumping between articles and doing the integration yourself.

AI-Powered Clinical Assistants

OpenEvidence

Has emerged as the fastest-growing clinical AI application in history. As of late 2025, it's used by more than 40% of U.S. physicians, with over 65,000 new verified clinicians registering monthly. The platform draws from more than 35 million peer-reviewed publications and has content partnerships with the New England Journal of Medicine, JAMA, and NCCN.

OpenEvidence made headlines in 2025 when it became the first AI to score a perfect 100% on the USMLE. More practically, clinicians appreciate its ability to provide sourced, cited answers to point-of-care questions—essentially a "curbside consult" with instant access to the literature.

The platform is free for physicians with an NPI number and has recently expanded access to medical students.

Glass Health

Takes a slightly different approach, focusing specifically on differential diagnosis and clinical plan generation. You enter a patient summary—age, sex, history, presenting symptoms—and Glass suggests potential diagnoses ranked by probability, along with evidence-based workup and treatment recommendations. It's designed to enhance diagnostic reasoning rather than answer reference questions.

Doximity GPT

Represents the "network effect" approach. Doximity, the LinkedIn of medicine, acquired Pathway Medical and integrated AI clinical decision support into its existing platform. Since over 80% of U.S. physicians already have Doximity accounts, the tool is available at no additional cost to this built-in user base.

General-Purpose AI Models

You've already explored ChatGPT, Claude, and Gemini in Module 4. These can certainly be used for clinical questions—and many physicians already do so. The advantages include flexibility (they can handle any query format), availability (no healthcare-specific registration required), and broader capabilities (they can help with documentation, communication, and non-clinical tasks too).

The disadvantage is that they're not optimized for medicine. They lack the curated knowledge bases and citation infrastructure of purpose-built clinical tools. They're more likely to hallucinate on specialized clinical questions. And as you learned in the PHI module, using them with patient data requires careful attention to privacy and compliance.

For students without NPI numbers or clinicians who want a Swiss army knife rather than a specialized scalpel, general-purpose models remain valuable. Just remember Module 4's lessons on verification—these tools require more active quality control than medical-specific platforms.


Part 3: Getting Access — A Practical Guide

The path to CDS tool access differs significantly depending on whether you're a practicing clinician, a trainee, or a student.

For Practicing Clinicians with NPI Numbers

If you have an NPI, access to medical-specific CDS tools is straightforward:

OpenEvidence

Go to openevidence.com and register with your NPI number. The platform will verify your credentials automatically. Once verified, you have unlimited access to the full platform at no cost. CME credits are also available for verified NPI users.

Glass Health

Visit glass.health and sign up for a free account. Glass verifies healthcare professional status but has a somewhat broader eligibility than OpenEvidence.

UpToDate

If your hospital or health system has an institutional subscription, access is typically free but requires verification every 90 days while connected to your institution's network. Individual subscriptions are available but expensive.

DynaMedex

Institutional access is common through hospital systems and medical schools. Individual subscription pricing varies by role.

For Residents and Fellows

Most training programs provide institutional access to UpToDate and/or DynaMed. Check with your program coordinator or medical library.

For OpenEvidence, your NPI number—which you receive when you start residency—gives you full access.

A Note on CME

OpenEvidence now offers free AMA PRA Category 1 Credits for verified NPI users. This represents genuine value—many CDS tools charge for CME. If you're going to look things up anyway, you might as well earn credit for it.

For Medical Students

Here's where access gets more complicated. You don't have an NPI number yet, which excludes you from some platforms designed for practicing clinicians.

  1. Institutional resources first: Check what your medical school provides. Most institutions have subscriptions to either UpToDate or DynaMed (some have both). Library websites typically list available databases. To maintain remote access, you usually need to verify your account periodically while on the institutional network—set a calendar reminder every 90 days.
  2. OpenEvidence for students: OpenEvidence has expanded access to U.S. medical students, NP students, and PA students. To register, you'll need to enter your school name and expected graduation date, then upload proof of student status—a photo of your student ID, a current transcript, or a signed letter confirming your enrollment.
  3. Glass Health: Students can create accounts and use the platform. The focus on differential diagnosis makes it particularly valuable for clinical rotations.
  4. Consumer AI tools: ChatGPT, Claude, and Gemini require no medical verification. They're immediately available to anyone. This makes them the most accessible option for students wanting to experiment with AI-assisted learning and clinical reasoning.

A Critical Reminder About PHI

PHI Rules Apply

Whatever tools you use, the PHI principles apply:

  • Never enter identifiable patient information into consumer AI platforms. This includes ChatGPT, Claude (web interface), and Gemini consumer versions. None of these have BAAs that would make them HIPAA-compliant for PHI.
  • OpenEvidence requires an organizational BAA for PHI. As of April 2025, OpenEvidence is HIPAA-compliant—but to input protected health information, your organization must sign a (free) Business Associate Agreement. An authorized signatory (physician-owner, CMIO, or compliance officer) can complete this online at openevidence.com. Individual clinicians without an organizational BAA should use de-identified queries only.
  • De-identification works for everyone. You can safely ask about "a 4-year-old male with 3 days of fever, cough, and reduced oral intake" without violating privacy. You cannot safely enter "Johnny Smith, DOB 01/15/2021, MRN 123456."
  • Your institution may have additional policies. Check with your compliance office before using any AI tool with patient information.
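To illustrate the contrast between the safe and unsafe queries above, here is a minimal, hypothetical redaction sketch. The patterns are invented examples and nowhere near sufficient for HIPAA Safe Harbor de-identification, which requires removing 18 categories of identifiers; treat it as a teaching aid, not a compliance tool.

```python
import re

# Illustrative only: a handful of invented patterns. Real de-identification
# must cover all 18 HIPAA Safe Harbor identifier categories.
PATTERNS = [
    (re.compile(r"\bMRN\s*\d+\b", re.IGNORECASE), "[MRN]"),
    (re.compile(r"\bDOB\s*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE), "[DOB]"),
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),  # naive name guess
]

def scrub(text):
    """Replace each matched identifier pattern with a placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Johnny Smith, DOB 01/15/2021, MRN 123456, fever x3 days"))
# -> [NAME], [DOB], [MRN], fever x3 days
```

Note how easily naive patterns fail (lowercase names, dates in other formats, addresses), which is exactly why the habit of composing de-identified queries from scratch beats pasting chart text through a filter.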

Students face a particular temptation: you're learning, you want to explore, and you encounter fascinating cases. Remember that real patients are attached to those details. Build good habits now.

What About BAAs?

A Business Associate Agreement is the legal contract that makes an AI tool HIPAA-compliant for use with PHI. Without a BAA, any PHI you enter into that tool represents a potential HIPAA violation.

Here's the current landscape as of late 2025:

  • OpenAI (ChatGPT): BAA available for API and Enterprise customers only. Not for Plus, Pro, or Team plans; consumer ChatGPT is not HIPAA-compliant.
  • Anthropic (Claude): BAA available for API customers only, with zero data retention agreements. Not for Claude.ai consumer plans.
  • Google (Gemini): BAA available through Workspace Business Plus and above, via Google's covered services. Gemini-in-Chrome is explicitly excluded.
  • Cloud providers: AWS Bedrock, Azure OpenAI, and Google Vertex AI all offer BAAs, often with faster approval.

For most individual clinicians and students, the practical implication is simple: use the consumer tools for learning and de-identified queries, use purpose-built clinical tools like OpenEvidence for point-of-care questions, and save the BAA complexity for organizational deployments.


Part 4: Using CDS Tools Effectively

Getting access is the easy part. Using these tools effectively requires all the skills you've developed in earlier modules—especially prompting and critical evaluation.

The Prompting Principles Apply

Remember Module 3's core lesson: the quality of your input determines the quality of your output. This applies to CDS tools as much as general AI.

Bad Prompt
"dose of amoxicillin"

Why it fails: The AI has no idea what you're treating, how old the patient is, what formulation you want, whether there are any contraindications, or what level of detail you need.

Better Prompt
"I'm treating a 3-year-old, 15 kg child with acute otitis media. No drug allergies. What's the appropriate amoxicillin dosing, including concentration and frequency?"

Why it works: The AI knows the indication (AOM), the patient parameters (age, weight), the relevant safety information (no allergies), and what you need (full dosing details).

Best Prompt
"3-year-old, 15 kg, with acute otitis media, no prior antibiotic exposure in the past 30 days, no drug allergies, no daycare attendance. Based on current AAP guidelines, what's the appropriate amoxicillin dosing for non-severe AOM? Please include high-dose versus standard-dose considerations."

Why it's best: You've included the clinical details that affect the recommendation (recent antibiotics, daycare exposure), specified the evidence source you want (AAP guidelines), and asked about the clinical decision point (dose escalation criteria).
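The dosing question itself is weight-based arithmetic you can sanity-check by hand. A minimal sketch, assuming the AAP high-dose convention of 80 to 90 mg/kg/day divided into two doses (confirm the current guideline before any clinical use):

```python
def weight_based_dose(weight_kg, mg_per_kg_per_day=90, doses_per_day=2):
    """Total daily dose and per-dose amount for weight-based dosing.
    Illustrative arithmetic only; verify dosing against current guidelines."""
    total_daily_mg = weight_kg * mg_per_kg_per_day
    return total_daily_mg, total_daily_mg / doses_per_day

daily, per_dose = weight_based_dose(15)  # the 15 kg child above
print(daily, per_dose)  # 1350 675.0
```

Running the AI's answer through this kind of back-of-the-envelope check is a cheap verification habit: if the tool's recommendation and your own arithmetic disagree, stop and find out why.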

Context Is Your Superpower

The patient history analogy from Module 3 extends directly to CDS queries. Just as a good history leads to better diagnoses, a good query leads to better AI-assisted answers.

Consider what information actually affects the clinical decision you're asking about. For medication dosing, that's usually age, weight, renal/hepatic function, allergies, and indication. For diagnostic workups, it's presenting symptoms, timeline, risk factors, and pretest probability. For treatment selection, it's prior treatments, contraindications, patient preferences, and resource constraints.

Include this information in your query. The AI can't read your mind or your chart.

Ask Follow-Up Questions

Unlike a textbook, CDS tools can engage in dialogue. Use this.

This iterative approach mirrors how you'd use a human consultant. You wouldn't ask a colleague one question and walk away—you'd have a conversation until you had what you needed.

Request Explicit Uncertainty

One of the most powerful prompting techniques for clinical AI is asking it to tell you what it doesn't know.

AI systems—especially those grounded in evidence bases—can often identify when the literature is sparse, when guidelines conflict, or when your specific clinical scenario falls outside the well-studied population. But they won't volunteer this information unless you ask.


Part 5: Practical Workflows — What CDS Does Well

Let's get concrete. How might CDS tools actually fit into your clinical day?

Differential Diagnosis Enhancement

You've seen a patient with an unusual presentation. You have a differential in mind, but you want to make sure you're not missing something.

Workflow: Summarize the case in a structured format (age, sex, key symptoms, timeline, relevant history), then ask the CDS tool to generate a differential diagnosis. Compare its suggestions to your own list. Pay particular attention to diagnoses in its "can't miss" category that weren't on your radar.

Example Query to Glass Health or OpenEvidence

"67-year-old male, 2-week history of progressive dyspnea, dry cough, low-grade fevers, and 10-pound unintentional weight loss. Ex-smoker (40 pack-years, quit 5 years ago). Recent travel to Arizona. No chest pain. Mild hypoxia on room air. CXR shows bilateral interstitial infiltrates. Please generate a ranked differential diagnosis."

The AI might remind you of coccidioidomycosis (that Arizona travel), which deserves specific testing.

Why this works better than a simple query: Notice how much more useful this is than asking "causes of bilateral infiltrates." The clinical context—smoking history, weight loss, travel, timeline—changes the differential entirely. The AI can't apply the right clinical reasoning if you don't give it the clinical data.

A Common Mistake

Learners sometimes ask CDS tools to confirm their suspected diagnosis rather than genuinely entertaining alternatives. If you ask "Could this be coccidioidomycosis?" you'll get a discussion of cocci. If you ask for a ranked differential with the relevant context, you might discover that lymphoma or cryptogenic organizing pneumonia should also be on your list.

Therapeutic Decision Support

You've made a diagnosis. Now you're choosing between treatment options—and you want to know what the evidence says about your specific clinical scenario.

Example Query

"I'm treating an 8-year-old with newly diagnosed ADHD. Parents are considering medication. The child has a history of anxiety (currently well-controlled without medication) and a family history of cardiac arrhythmia in a first-degree relative. What's the evidence comparing methylphenidate versus amphetamine formulations in terms of efficacy and side effect profiles, and what cardiac screening would you recommend given the family history?"

This gives the AI enough context to address both your primary question (which stimulant) and the safety consideration (cardiac workup).

Quick Reference Questions

Sometimes you just need a fact—a dosing range, a diagnostic criterion, a guideline recommendation. CDS tools excel at these quick lookups.

The key is specificity. Include enough context that the answer will be relevant to your patient, and request the current guideline source when recommendations change frequently.

Literature Synthesis

You've encountered a clinical question where you suspect the evidence has evolved. Rather than reading 15 papers, you want a synthesis.

Example Query

"Summarize the current evidence on SGLT-2 inhibitors in HFpEF. Has the evidence base changed since the DELIVER trial? What's the current strength of recommendation?"

OpenEvidence, with its access to recent NEJM and JAMA content, is particularly strong for this use case.

Patient Communication

You need to explain a diagnosis or treatment to a patient in accessible language.

Example Query

"Please explain diabetic retinopathy to a patient at a 6th-grade reading level, including why regular eye exams matter and what they can do to reduce risk of progression."

You'll still want to personalize the output, but this gives you a foundation to work from.

Pre-Rounding and Preparation

You're about to see a patient you haven't encountered before—a new admission, a transfer from another service, or a complex follow-up. CDS can help you prepare.

Example Query

"I'm about to see a patient with newly diagnosed systemic lupus erythematosus with lupus nephritis class IV on initial biopsy. What are the key things I should assess clinically? What are the common early treatment complications I should monitor for? What questions are patients typically most concerned about?"

This turns CDS into a preparation tool rather than just a reference resource.

Complex Case Brainstorming

Sometimes you're genuinely stuck. The patient doesn't fit a pattern. The workup hasn't revealed an answer. You need a thought partner.

Example Query

"I'm struggling with this case: 52-year-old woman with 6 months of progressive fatigue, intermittent low-grade fevers, and now new-onset bilateral ankle swelling. Labs show normocytic anemia, mildly elevated ESR, and unexplained hypercalcemia. ANA negative. CT chest/abdomen/pelvis unremarkable. I'm stuck between sarcoidosis (despite no pulmonary findings), occult malignancy, and an atypical connective tissue disease. What diagnoses am I not considering? What additional testing might help differentiate? Any historical elements I should go back and explore?"

The AI becomes a sounding board—surfacing possibilities you might have missed and suggesting ways to move forward.

Specialty-Specific Applications

Different specialties lean on CDS tools in different ways, shaped by their own common questions and workflows.

The best applications are those where you're already mentally reaching for a reference—CDS just makes the reference faster and more tailored to your specific question.


Part 6: What CDS Doesn't Do Well

No tool is perfect. Knowing the limitations of CDS helps you use it appropriately.

Rare Conditions and Edge Cases

AI models perform best on common conditions where training data is abundant. When you encounter a rare disease or an atypical presentation, the AI may give you generic guidance that doesn't address your specific situation—or it may hallucinate plausible-sounding recommendations that aren't grounded in evidence.

Mitigation: When dealing with rare conditions, use CDS as a starting point, then verify recommendations against specialty resources or human experts. Ask the AI directly: "How common is this condition, and how confident are you in these recommendations?"

Rapidly Evolving Evidence

CDS tools have knowledge cutoffs. OpenEvidence and similar platforms update continuously, but there's always a lag between a practice-changing trial and its incorporation into recommendations. For questions where the evidence might have changed in the past few months, search for the primary literature or check society guideline updates.

Complex Multi-System Disease

Patients with multiple comorbidities often fall between guideline recommendations. The COPD guidelines and the heart failure guidelines might give conflicting medication recommendations. CDS tools generally present each condition's guidance without reconciling conflicts.

Mitigation: Ask specifically about interactions: "How should I balance beta-blocker therapy in a patient with both COPD and HFrEF? Are there evidence-based approaches to this common clinical tension?"

Clinical Judgment and Context

CDS tools can tell you what the evidence says. They can't weigh that evidence against everything else you know about this patient—their values, their living situation, their support system, their priorities. That integration remains your job.

A tool might correctly recommend a medication, but if the patient can't afford it, if they've tried it before with intolerable side effects, or if they have cognitive impairment that makes complex regimens inadvisable, the recommendation doesn't help.

False Confidence

Perhaps the most dangerous failure mode is the AI that sounds certain when it shouldn't be. You learned this in Module 1: language models are trained to produce fluent, confident-sounding text regardless of whether the underlying content is correct.

Mitigation: Build the habit of verification. Check key recommendations against primary sources. Ask the AI about its confidence level and the quality of underlying evidence. Maintain your own clinical reasoning rather than outsourcing it entirely.

Semantic Drift: A Real-World Example

In December 2025, Dr. Olivia Milgrom documented a striking example of how even well-designed medical AI can fail. A popular clinical AI tool confidently recommended dose reduction for oxacillin in dialysis patients—advice that could lead to subtherapeutic antibiotic levels in critical situations. The correct answer: oxacillin does not require renal dose adjustment.

How did this happen? Dr. Milgrom traced the error to what she calls a "semantic drift perfect storm."

As Dr. Milgrom wrote: "The drug label's conservative precautionary language doesn't capture this nuanced reality. It's written for the worst-case scenario, not the typical clinical case. The AI interpreted it as an absolute requirement for all dialysis patients. This is the constraint vs. real-world scenario mismatch. And medical knowledge is full of exceptions, contexts, and constraints."

The Lesson

Even when a CDS tool cites legitimate sources, semantic drift can transform cautious hedging into confident recommendations. The AI may not hallucinate the reference—but it can hallucinate the interpretation. This is why verification matters most precisely when the AI sounds most confident and cites its sources.

Verification in Practice

What does verification actually look like in a busy clinical day?

A Practical Heuristic

The stakes of the decision should match the rigor of your verification. A suggestion for a handout to give patients needs less scrutiny than a recommendation for an off-label medication in a high-risk population.


Part 7: Building Good Habits

The clinicians who get the most value from CDS tools share certain practices.

Use CDS Before You're Stuck

The natural tendency is to reach for decision support only when you're uncertain. But CDS tools can also confirm that you're on the right track, remind you of steps you might forget under time pressure, and surface considerations you hadn't thought of.

Using CDS routinely—not just when you're lost—helps you catch errors before they happen and exposes you to the current literature even on familiar topics.

Document Your Reasoning

If a CDS tool influenced your clinical decision, consider documenting that in your note. "Plan reviewed with OpenEvidence; recommendations consistent with AAP guidelines for acute otitis media." This creates a record of your reasoning, demonstrates that you sought evidence, and helps future clinicians understand your thought process.

Stay Current on Tool Capabilities

These tools are evolving rapidly. OpenEvidence's capabilities in late 2025 are dramatically different from its capabilities even a year earlier. What you learned in training may be outdated by the time you're practicing independently.

Check for new features periodically. Read the release notes or announcement pages. Try the same query on different platforms to see which works best for your specialty.

Develop Your Own Prompting Templates

Over time, you'll notice that certain query structures work well for your common use cases. Save these as templates—whether in a notes app, your phone, or your own memory.

A pediatrician might have a template for "fever without source" workups. An internist might have one for polypharmacy reconciliation. An emergency physician might keep templates for the most common chief complaints.
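As an illustration, a saved template for a therapeutic question might look like the sketch below. The structure, not the specific wording, is the point; every bracketed item is a placeholder to fill in per case.

```
Patient: [age] [sex] with [relevant comorbidities], currently on [medications].
Clinical question: For [condition], what does current evidence support
regarding [specific decision point]?
Constraints: [renal/hepatic function, allergies, pregnancy status, if relevant].
Please cite the guideline or trial behind each recommendation, and flag
any areas where the evidence is weak or conflicting.
```

A template like this bakes in the prompting principles covered earlier: patient context, a specific question, and an explicit request for sources and uncertainty.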

Maintain Your Own Knowledge

CDS is a complement, not a replacement, for your own expertise. The clinician who knows nothing and relies entirely on AI will miss nuances, fail to ask the right questions, and struggle when the technology fails.

Continue reading primary literature in your specialty. Attend conferences. Discuss cases with colleagues. Build your pattern recognition. The goal is a synthesis: human expertise augmented by AI capability, each making the other stronger.


Part 8: The Future of CDS

Where is this headed? Several trends seem likely:

Deeper EHR Integration

The current experience—switching from your EHR to a separate app or browser tab—is clunky. The future likely involves CDS embedded directly in documentation workflows. You're writing a note, and the AI proactively surfaces relevant guidelines. You're ordering a medication, and the AI suggests dose adjustments based on the patient's renal function already in the chart.

Some health systems are already piloting these integrations. The challenge is ensuring that "helpful" doesn't become "intrusive"—alert fatigue is already a major problem with current CDSS implementations.

Ambient Intelligence

The next generation of CDS may not require you to type anything. Ambient listening technology can capture the patient encounter and generate CDS prompts automatically. "Based on the symptoms discussed, consider the following differential..." This is already emerging in documentation tools like Nuance DAX and AWS HealthScribe; extending it to decision support is a natural progression.

Patient-Facing AI

Today's CDS is designed for clinicians. But patients are already using general-purpose AI for health questions. Purpose-built patient-facing AI—with appropriate guardrails—could help patients prepare for visits, understand their diagnoses, and follow treatment plans.

This raises important questions about the role of physicians when patients arrive having already consulted an AI. The optimistic view: informed patients lead to better shared decision-making. The concerning view: miscalibrated AI advice could lead patients astray or damage trust. Likely both are true, depending on implementation.

Continuous Learning

Current CDS tools learn from the literature but not from clinical outcomes. Future systems may close this loop: tracking whether recommendations led to good patient outcomes and adjusting accordingly. This raises challenges around data governance and algorithmic transparency, but it could dramatically accelerate evidence synthesis.


Resources for Continued Learning

Getting Started

OpenEvidence

Website: openevidence.com
Access: Free for NPI holders; student registration available
Mobile: iOS and Android apps
Strength: Evidence synthesis with NEJM/JAMA partnerships

Glass Health

Website: glass.health
Access: Free account registration
Strength: Differential diagnosis and clinical plan generation

UpToDate

Website: uptodate.com
Access: Institutional subscriptions; individual available
New: Expert AI conversational interface
Strength: Curated, authoritative content

DynaMedex

Website: dynamedex.com
Access: Institutional or individual ($149/year students)
Strength: Systematic evidence grading, multiple daily updates

Podcasts & Videos

The Heart of Healthcare Podcast, December 2025 · OpenEvidence co-founder Zack Ziegler on building AI that helps clinicians make better decisions without replacing their judgment. Essential listening for understanding the philosophy behind modern CDS.
Stat AI: Emergency Medicine Podcast, Episode 10, February 2025 · Dr. Travis Zack and Varun Mangalick discuss how AI tools assist physicians in real-time diagnosis and treatment planning.
Ongoing podcast series · Raj Manrai and Andrew Beam host conversations with experts at the intersection of AI and medicine. Check for episodes on clinical decision support and diagnostic AI.
KevinMD Podcast, November 2025 · Cardiologist Saurabh Gupta on why algorithms influencing clinical care must meet rigorous standards—essential context for critically evaluating CDS tools.

Articles and Research

"An overview of clinical decision support systems: benefits, risks, and strategies for success"
npj Digital Medicine, 2020 · Comprehensive review of CDSS implementation
SAGE Open Medicine, 2025 · Recent study evaluating OpenEvidence in primary care—rated high in clarity, relevance, and evidence-based support
"Toward a responsible future: recommendations for AI-enabled clinical decision support"
JAMIA, 2024 · AMIA consensus recommendations on trustworthy AI-CDS
STAT News, October 2025 · Coverage of UpToDate's entry into AI-powered CDS and physician reactions
Olivia Milgrom, MD, MPH · December 2025 · Essential case study of how CDS tools can misinterpret sources—even when citing them correctly

Quick Reference: Key Points

What CDS Tools Are

  • Technology that delivers evidence-based guidance at the point of care
  • Range from curated knowledge bases (UpToDate) to AI-powered assistants (OpenEvidence, Glass)
  • Designed to augment, not replace, clinical judgment

How to Get Access

  • NPI holders: OpenEvidence free; Glass free; UpToDate requires subscription
  • Students: Check institutional access first; OpenEvidence allows student registration
  • Remember: No consumer AI tool is HIPAA-compliant for PHI

How to Use Effectively

  • Apply prompting principles: context, specificity, explicit uncertainty requests
  • Include relevant clinical details in queries
  • Ask follow-up questions; engage in dialogue
  • Verify key recommendations against primary sources

What CDS Does Well

  • Differential diagnosis enhancement
  • Therapeutic decision support
  • Quick reference lookups
  • Literature synthesis
  • Patient communication drafting

What CDS Doesn't Do Well

  • Rare conditions and edge cases
  • Rapidly evolving evidence
  • Complex multi-system disease reconciliation
  • Clinical judgment and contextual factors
  • Flagging its own uncertainty

Building Good Habits

  • Use CDS proactively, not just when stuck
  • Document AI-assisted reasoning when appropriate
  • Stay current on tool capabilities
  • Develop personal prompting templates
  • Maintain your own clinical expertise

Summary

Clinical decision support has evolved from ambitious but brittle expert systems of the 1970s to today's AI-powered tools that can synthesize millions of papers into actionable guidance. The tools aren't perfect—they struggle with rare conditions, can't replace clinical judgment, and sometimes project false confidence. But used well, they can make you a better clinician.

The key is integration. CDS tools are most valuable when they augment your existing expertise rather than replacing your thinking. Good prompting matters—the clinical communication skills you've spent years developing transfer directly to AI interaction. And critical evaluation remains essential—trust the tools enough to learn from them, but verify enough to catch their errors.

The clinicians who thrive in the AI era won't be those who resist the technology or those who surrender their judgment to it. They'll be the ones who learn to collaborate with it—bringing human expertise to questions that require judgment while leveraging AI capabilities for evidence synthesis and pattern recognition.

Learning Objectives

  • Trace the evolution of clinical decision support from expert systems to AI-powered tools
  • Compare traditional knowledge bases, AI clinical assistants, and general-purpose models
  • Navigate access pathways for CDS tools as a clinician, trainee, or student
  • Apply effective prompting strategies to clinical queries
  • Identify appropriate use cases and limitations of CDS tools
  • Develop habits for integrating CDS into clinical workflow responsibly