About This Guide
Why AI literacy matters for clinicians and medical learners.
The Problem We're Addressing
AI is entering healthcare faster than medical education is adapting. Clinicians encounter AI tools with little framework for evaluating them, supervising their outputs, or understanding their failure modes. The result is a dangerous gap: systems deployed without adequate oversight, clinicians unsure when to trust AI recommendations, and institutions struggling to govern technology they don't fully understand.
This guide addresses that gap—not by making you into an AI engineer, but by developing the critical thinking skills needed to be a sophisticated consumer and supervisor of clinical AI.
Who This Is For
- Clinicians in practice who encounter, or will soon encounter, AI tools and want to use them effectively
- Medical students and residents building foundational AI literacy as part of their training
- APP students (PA, NP, and other advanced practice programs) preparing for AI-augmented practice
- Anyone in healthcare who needs to evaluate AI systems, explain AI to patients, or think critically about these tools
No technical background is required. We assume clinical knowledge and build AI literacy on that foundation.
Core Philosophy
The goal is not to master AI technology, but to become an adaptive practitioner—someone who can fluidly shift between integrating AI assistance and exercising independent clinical judgment, depending on the context.
We draw a distinction between Centaur and Cyborg modes of human-AI collaboration. Centaurs keep a clear division of labor, delegating discrete tasks to the AI while reserving others for themselves; Cyborgs weave AI into nearly every step of their work. Both have their place. The adaptive practitioner knows when each mode is appropriate and can shift between them deliberately.
How This Guide Works
Self-Paced Learning
Work through the modules at your own speed. There are no deadlines, no synchronous sessions, no cohort to keep up with. Spend more time on topics that interest you; skim what you already know.
Suggested Sequence
The modules are ordered to build on each other: foundations first (how AI systems work and fail), then generative AI (large language models and their supervision), then governance (liability, institutional policy, patient communication). But feel free to jump around based on your needs.
Primary Sources
We prioritize original research over textbooks. You'll read the actual Obermeyer paper on algorithmic bias, the CLAIM Checklist, the Med-PaLM publications—not summaries of summaries. This develops the skill of engaging directly with research evidence.
Hands-On Activities
Most modules include practical exercises: using NotebookLM, evaluating AI papers, crafting prompts. The goal is to apply concepts immediately, not just read about them.
What This Guide Is Not
- Not a technical AI course. We don't teach you to build machine learning models. We teach you to evaluate, supervise, and work with them.
- Not a comprehensive survey. We don't cover every AI application in healthcare. We focus on core concepts that transfer across domains.
- Not a vendor certification. We're not training you to use specific products. We're developing judgment that applies to any AI system you encounter.
- Not a credential. There are no CME credits, certificates, or formal assessments. The value is in what you learn, not what you earn.
Time Investment
Each module takes roughly 2-4 hours to complete, depending on how deeply you engage with the readings and activities. The full guide represents about 30-40 hours of learning—but there's no pressure to complete it all. Even one or two modules will give you useful frameworks.
How This Course Was Made
This course about AI was built with AI. That's not a confession—it's the point.
The Process
Every module you've read emerged from a collaboration between a physician (me) and a large language model (Claude, made by Anthropic). The workflow looked something like this:
- I created an outline based on what I thought clinicians actually needed to know
- I wrote a detailed prompt describing the audience, tone, and learning objectives
- Claude generated a draft
- I reviewed, pushed back, asked for revisions, cut what didn't work, and added what was missing
- We repeated until it felt right
Replit helped format and publish the website. The whole thing came together in a fraction of the time it would have taken me to write alone—and, I'd argue, with better results than either of us could have produced independently.
What This Means for You
This course argues that AI is a tool—powerful, useful, and only as good as the human directing it. The course itself is evidence of that claim.
The modules exist because I knew what questions to ask, what structure would serve learners, and what needed to be cut or rewritten. They're better than what I'd have written alone because AI can draft quickly, suggest framings I wouldn't have considered, and maintain consistency across sections in ways that are tedious for humans.
Neither of us could have done this alone. Both of us were necessary.
That's the partnership this course is trying to help you build with AI in your own work. Not replacement. Not magic. Just a capable collaborator that requires your judgment to be useful—and your vigilance to be safe.
About the Author
Michael Hobbs, MD
I'm a pediatrician and founder of Lakes Pediatrics, and I consult with health tech teams trying to build products that actually work for clinicians and patients. After 25 years in practice—and advising both startups and established companies on AI—I've learned to translate between "startup speak" and what happens in the clinic. I watched healthcare fumble computers and connectivity for two decades, and I'm determined to help us get AI right.
If you'd like to support this project, I can always use tokens to power Echs and help build more tools.
Acknowledgments
This guide was developed with input and inspiration from colleagues working at the intersection of medicine and AI.
Feedback
Found an error? Have a suggestion? Questions about the process? Get in touch →