So...What Next?
You've built a foundation. Now it's time to find your path, keep learning, and start building things that matter.
Start. Today. Don't wait until you feel ready. Don't wait until you've read one more article or taken one more course. The best way to learn AI is to use AI. Build something small. Break it. Fix it. Build something else. The people who will thrive in an AI-augmented world are the ones who started before they felt qualified.
Congratulations—Now the Real Work Begins
If you've made it through this curriculum, you now understand more about AI in healthcare than most of your colleagues. You know how language models think (and fail to think). You understand the privacy landscape. You can evaluate clinical AI tools critically. You've seen the promise and pitfalls of AI-powered documentation and decision support.
But here's the uncomfortable truth: everything you just learned has a half-life.
The field is moving at a pace that makes even experts feel like perpetual beginners. GPT-4 was released in March 2023. By the time you read this, we'll be multiple generations beyond it—and whatever comes next will probably surprise us. The specific tools, the specific capabilities, the specific limitations—all of these are shifting under our feet.
So what's the point of learning any of this if it's going to change?
Because the fundamentals transfer. Critical evaluation skills don't expire. Understanding how to supervise AI outputs works across generations. Knowing how to think about privacy, bias, and clinical integration—these are durable skills even as the technology evolves.
Your job now isn't to "finish" learning about AI. It's to build habits for continuous learning so you can evolve alongside the technology.
Accept the Speed (Or Get Left Behind)
Let's be honest about what you're up against. In 2025 alone:
- OpenAI released o3, o4-mini, GPT-5 (with Mini and Nano variants), then GPT-5.1, and now GPT-5.2—all in one year
- Anthropic shipped Claude Opus 4 and Sonnet 4 in May, then Sonnet 4.5 in September, then Opus 4.5 in November—three major releases in six months
- Google launched Gemini 2.5 Pro and Flash, then Gemini 3 Pro and Deep Think by November
- DeepSeek released R1 in January (rivaling GPT-4 at a fraction of the cost), then V3.1 and V3.2 through the fall
- AI agents went from demos to production—computer use, coding assistants, research tools that actually work
- Every major EHR vendor shipped or expanded AI documentation features
And that's just the major releases from major players. The startup ecosystem, the research labs, the open-source community—all moving at similar velocity.
You can't keep up with everything. No one can. The goal isn't comprehensive knowledge—it's developing an efficient information diet and learning how to quickly evaluate what matters for your work.
Build Your Information Diet
You need reliable sources that filter the noise without missing the signal. The resources below are some of my favorites—they're a good place to start, but you'll find new ones you love as you develop your own information diet.
Subscribe to 2-3 of these. Skim them weekly. Most items won't be relevant to you—that's fine. You're looking for the occasional piece that changes how you think or points to something you should try.
The Lenny's Pass: A Shortcut Worth Knowing About
One resource deserves special mention. Lenny's Newsletter offers something called "Lenny's Pass" (or "Product Pass") to annual subscribers that's genuinely remarkable for anyone wanting to experiment with AI tools.
For $200-350/year, annual subscribers get free year-long access to many of the most popular AI-powered tools in the world—tools that would cost thousands of dollars individually:
- Cursor — The AI-native code editor we mentioned in Vibe Coding
- v0 — Vercel's AI that generates React components from descriptions
- Replit — Browser-based coding environment with AI assistance
- Lovable — AI-powered app builder
- Bolt — Another AI app builder for rapid prototyping
- Granola — AI notetaker for meetings
- Notion AI — AI features in the popular workspace tool
- Perplexity — AI-powered search and research
- And many more...
The catch: you need to be a new customer of each product to get the free year, and some popular tools sell out. The $350 "Insider" tier guarantees access to everything.
This isn't an endorsement—it's a practical observation. If you're planning to experiment with AI development tools anyway, this bundle could save you significant money while giving you access to experiment with multiple platforms. The newsletter itself is also genuinely excellent for understanding how tech companies think about product development and AI integration.
Find Your Niche
Here's the liberating secret: you don't need to be an AI generalist.
The person who becomes known as "the one who figured out how to use AI for [specific clinical workflow]" will be more valuable than someone with broad but shallow AI knowledge. Depth beats breadth, especially in a field this complex.
Think about what frustrates you most in your daily work:
- Is it documentation? Go deep on ambient AI and learn how to optimize it for your specialty.
- Is it patient communication? Build custom GPTs that handle your most common questions.
- Is it keeping up with literature? Master NotebookLM and AI-powered research workflows.
- Is it practice management? Explore how AI can handle scheduling, referrals, prior auth.
- Is it education? Look at how AI can enhance teaching, build simulations, create learning materials.
Pick one area. Spend 90% of your AI learning time there. Become the expert in that niche. Then expand.
A pediatric hospitalist who deeply understands AI documentation will contribute more to their field than an administrator who superficially understands all AI applications. Your clinical expertise is the differentiator—combine it with focused AI knowledge and you become genuinely valuable.
Learning Resources: From Foundations to Frontiers
The foundation you've built here should make everything below more accessible. Pick based on your goals.
For Deepening Technical Understanding
For Learning to Evaluate AI Systems: The Critical Skill
Evals—short for evaluations—is the emerging discipline of rigorously testing what AI systems can and can't do. This might sound academic, but it's arguably the most important skill for anyone deploying AI in clinical settings. Here's why.
When a vendor tells you their AI is "95% accurate," what does that actually mean? Accurate at what? On what population? Measured how? Under what conditions? The answers to these questions determine whether the tool will help or harm your patients.
Consider a simple example: An AI scribe claims "95% accuracy" in generating clinical notes. But what if:
- The 5% errors are concentrated in medication dosing and allergy documentation?
- The accuracy drops to 70% for patients with accents the training data didn't include?
- The system was evaluated on 15-minute primary care visits but you're using it for 45-minute psychiatric intakes?
- "Accuracy" was measured by whether the note contained the right information, not whether it omitted critical details?
Understanding evals means understanding what questions to ask, how to interpret the answers, and—increasingly—how to run your own evaluations before deploying AI in your practice.
Think like a skeptical scientist, not a hopeful adopter. Every AI claim should trigger the question: "How would I test that?" The clinicians who will navigate AI safely are those who habitually ask for evidence and understand what that evidence actually shows.
Core Concepts in AI Evaluation
You don't need to become an evaluation scientist, but understanding a few key concepts will make you a much more sophisticated consumer of AI tools:
- Benchmarks vs. Real-World Performance: AI models are often evaluated on standardized benchmarks (like medical licensing exams), but benchmark performance doesn't always translate to clinical utility. A model that aces USMLE questions might still give dangerous advice to a real patient with atypical presentation.
- Distribution Shift: AI systems perform best on data that looks like their training data. When your patient population, documentation style, or clinical workflow differs from what the AI was trained on, expect degraded performance. This is why an AI trained on academic medical center data might fail in a rural clinic.
- Failure Modes: It's not enough to know how often AI is right—you need to know how it fails. Does it fail safely (expressing uncertainty) or dangerously (confidently wrong)? Does it fail randomly or systematically on certain patient populations?
- Task-Specific Evaluation: An AI that's excellent at summarizing notes might be terrible at identifying drug interactions. Evaluate each use case separately; don't assume competence transfers.
- Longitudinal Drift: AI performance can degrade over time as the world changes but the model doesn't. New drug names, updated guidelines, emerging diseases—static models become stale. Ongoing evaluation is essential, not just initial testing.
Learning to Run Your Own Evals
The practical application: Before deploying any AI tool in your practice, create a test set of challenging cases from your own patient population. Run the AI on these cases. Have colleagues blindly evaluate the outputs. Document failures. This isn't academic—it's basic due diligence that most healthcare organizations skip.
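To make that concrete, here's a minimal sketch of what a homegrown eval harness can look like in Python. Everything in it is illustrative: the test cases, the "must include" grading criteria, and the ask_model function (a stand-in for whatever tool or API you're evaluating) are placeholders you'd replace with your own.

```python
# A minimal eval-harness sketch: run a fixed test set through an AI tool,
# save the outputs for blinded review, and track results over time.
import csv
import datetime

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a call to the tool or API you're evaluating.
    return "dummy output so the harness itself can be tested"

test_cases = [
    # Draw these from YOUR practice, and include the hard ones:
    # atypical presentations, complex med lists, accented dictation.
    {"id": "case-001", "prompt": "Summarize this visit: ...", "must_include": ["penicillin allergy"]},
    {"id": "case-002", "prompt": "Summarize this visit: ...", "must_include": ["metformin 500 mg"]},
]

with open("eval_results.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for case in test_cases:
        output = ask_model(case["prompt"])
        # Automated check for omissions you anticipated in advance
        missing = [item for item in case["must_include"] if item not in output]
        writer.writerow([datetime.date.today(), case["id"], output, ";".join(missing)])
```

The automated check only catches the omissions you anticipated; the blinded human review of the saved outputs is what surfaces the failures you didn't.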
If You Want to Build: Developer Foundations
Vibe coding is real and powerful—but the most capable builders understand at least the basics of what's happening under the hood. You don't need to become a software engineer, but some foundational skills will multiply your effectiveness dramatically.
Think of it like learning to drive. You don't need to be a mechanic, but understanding that the car needs gas, how brakes work, and what that warning light means makes you a much better driver. The same applies to building with AI.
Git: Your Safety Net
Git is version control—it lets you save snapshots of your work and roll back when things break. Every serious software project uses it. Even if you're vibe coding, understanding Git means:
- You can save working versions before experimenting with risky changes
- You can undo catastrophic AI-generated changes with a single command (for example, git restore . discards every uncommitted edit)
- You can collaborate with others (or with AI agents) without losing work
- You can deploy your projects to the web for free via GitHub Pages
- You can see exactly what changed between versions—essential for debugging
GitHub (the most popular Git hosting service) also gives you free website hosting, issue tracking, and collaboration tools. Learning the basics takes an afternoon and pays dividends forever.
git status — See what's changed
git add . — Stage all changes
git commit -m "message" — Save a snapshot with a description
git push — Upload to GitHub
git pull — Download changes from GitHub
That's it. You can learn more commands later, but these five will handle 90% of what you need.
The Command Line: Where Power Lives
The command line (terminal, shell, CLI—the terms are used more or less interchangeably) feels intimidating but is simpler than you think. It's just typing commands instead of clicking buttons. Why bother?
- Most serious AI tools work best (or only) from the command line
- Automation becomes possible—run tasks on schedule, process multiple files at once
- You can use AI coding assistants like Claude Code that work in terminal environments
- Many problems are solved with a single command that would take dozens of clicks
- It's how developers actually work, so learning it unlocks their tutorials and resources
How to open the terminal: On Mac, search for "Terminal" in Spotlight (Cmd+Space) or find it in Applications → Utilities. On Windows, search for "PowerShell" or "Windows Terminal" in the Start menu. You'll see a blank window with a blinking cursor—that's it. You're in.
Navigation:
cd foldername — Change directory (go into a folder)
cd .. — Go up one level
cd ~ — Go to your home directory
ls — List files in current folder (dir on Windows)
pwd — Print working directory (shows where you are)
File operations:
mkdir foldername — Create a new folder
touch filename — Create a new empty file
cp file1 file2 — Copy a file
mv file1 file2 — Move or rename a file
rm filename — Delete a file (careful—no trash can!)
Useful extras:
clear — Clear the screen when it gets cluttered
history — See your recent commands
ctrl+c — Stop whatever's currently running
Open Terminal and try navigating around. You can't break anything just by looking.
Pro tip: When you're learning, have an AI chat window open alongside your terminal. When you get stuck or see an error, paste it into Claude or ChatGPT and ask what went wrong. This is how most developers actually work—nobody memorizes everything.
APIs: How Software Talks to Software
An API (Application Programming Interface) is how different software systems communicate with each other. When you use ChatGPT through the website, you're using their interface. When you build your own tool that uses GPT, you use their API—a structured way to send requests and receive responses programmatically.
Understanding APIs matters because:
- It's how you integrate AI into your own applications
- API access is often cheaper than chat subscriptions for high-volume use
- APIs give you more control—you can customize behavior, process data in bulk, and build workflows
- Many AI tools (Claude Code, automation platforms, custom apps) use APIs under the hood
How APIs Work (The Simple Version)
Think of an API like a restaurant. You (the customer) don't go into the kitchen—you give your order to a waiter (the API), who brings it to the kitchen (the server), and returns with your food (the response). The menu tells you what you can order and how to order it (the documentation).
In practice, using an AI API means:
- You send a request containing your prompt and settings
- The AI processes it on their servers
- You receive a response containing the AI's output
- You pay based on usage, typically per token (a chunk of text roughly three-quarters of a word)
API Keys: Your Access Credential
To use an API, you need an API key—a long string of characters that identifies you and authorizes your requests. It's like a password that lets the service know who's making requests and who to bill.
API keys are sensitive. Anyone with your key can make requests and you'll be charged. Never share them publicly, don't commit them to GitHub, and don't paste them into chat windows. Store them securely—most developers use environment variables or secret management tools.
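As one concrete pattern, here's a minimal sketch (assuming Python and the Anthropic SDK) of the environment-variable approach: the key lives in your shell configuration, not in your code, so it never ends up in your Git history.

```python
import os
import anthropic

# Set the variable once in your shell, e.g.:  export ANTHROPIC_API_KEY="sk-ant-..."
# then read it at runtime instead of hard-coding the key in your source file.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
```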
Getting API Keys from Major Providers
Your First API Call
Once you have an API key, the simplest way to test it is through each provider's "playground" or documentation. But if you want to use it programmatically, here's what a basic API call looks like in Python (the most common language for AI work):
```python
# Example: Calling the Claude API (simplified)
import anthropic

client = anthropic.Anthropic(api_key="your-api-key-here")  # better: load this from an environment variable, as above

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain APIs in one paragraph."}
    ],
)

# The response arrives as a list of content blocks; the text lives inside them.
print(message.content[0].text)
```
You don't need to understand every line right now. The point is: it's not magic. You're sending a message, specifying which model to use, and getting a response back. AI coding assistants can help you write and modify code like this—you describe what you want, they handle the syntax.
Understanding How Web Apps Work
If you want to build tools that run in a browser (rather than just on your local machine), it helps to understand the basic architecture:
- Frontend: What users see and interact with. HTML defines structure, CSS defines styling, JavaScript makes it interactive. Modern frameworks like React or Vue make this easier.
- Backend: The server that processes data, talks to databases, handles authentication. This is where sensitive logic lives. Python (with Flask or FastAPI) is popular and readable.
- Database: Where data persists. User accounts, saved preferences, application state. SQL databases (PostgreSQL, MySQL) are the workhorses; "NoSQL" databases (MongoDB) are more flexible.
- APIs: How different systems talk to each other. When your frontend needs data from your backend, it makes an API call. When you integrate with OpenAI or Anthropic, you use their APIs.
You don't need to master all of this—that's what AI coding assistants are for. But understanding the concepts helps you communicate with the AI and debug when things go wrong.
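To tie those pieces together, here's a minimal backend sketch, assuming Python with FastAPI and the Anthropic SDK (the route name and request shape are illustrative choices, not from any particular product). The frontend sends text to this endpoint; the backend calls the AI API and returns the result, so the API key never leaves the server.

```python
# Minimal backend sketch: a FastAPI server exposing one AI-powered endpoint.
# Run with: uvicorn main:app --reload   (assuming this file is main.py)
from fastapi import FastAPI
from pydantic import BaseModel
import anthropic

app = FastAPI()
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

class SummarizeRequest(BaseModel):
    text: str  # the frontend sends JSON like {"text": "..."}

@app.post("/summarize")
def summarize(req: SummarizeRequest):
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=500,
        messages=[{"role": "user", "content": f"Summarize this:\n\n{req.text}"}],
    )
    # Return JSON the frontend can render; credentials stay on the server.
    return {"summary": message.content[0].text}
```

Notice the division of labor: the frontend only ever talks to your backend, and the backend is the single place that holds credentials and calls external APIs.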
AI-Powered Development Tools
These are the tools that make building accessible to non-engineers. The first three deserve special attention—they're genuinely transformative, but that power comes with real responsibility.
Cursor: The AI Code Editor
Cursor is where most people should start. It looks and feels like VS Code (the most popular code editor), but with AI deeply integrated. You can highlight code and ask "what does this do?" You can describe a feature and watch it write the implementation. You can paste an error message and get a fix.
The "Composer" feature is particularly powerful—it can make coordinated changes across multiple files, understanding how your codebase fits together. For building real applications, this is transformative.
Cursor can run commands, modify files, and execute code. When it's working well, this is magical. When it makes a mistake, it can break things quickly. Always use Git so you can roll back. Review changes before accepting them. Start with small projects where mistakes are cheap. The AI is confident even when it's wrong—your job is to verify.
Claude Code and Codex: AI in the Command Line
These tools are more powerful than Cursor in some ways—they operate directly in your terminal, can run shell commands, and handle complex multi-step tasks autonomously. You describe what you want ("add user authentication to this app" or "find and fix the bug causing the test to fail"), and they figure out the steps: reading files, making changes, running tests, iterating until it works.
Both are excellent. Claude Code tends to be more thorough and careful; Codex can be faster and more aggressive. Try both and see which one matches how you think. Many developers use both depending on the task.
These tools can delete files, run commands, modify your system. They're operating with your permissions. A misunderstood instruction can cascade into significant damage before you notice. Always work in a Git repository. Always review what the agent is doing. Start with low-stakes projects. The productivity gains are real, but so is the risk of an AI confidently executing the wrong plan.
App Builders and Infrastructure
A Learning Path for the Ambitious
If you want to go beyond vibe coding to actually understanding what you're building:
- Week 1-2: Learn Git and GitHub basics. Create a repository, make commits, push to GitHub.
- Week 3-4: Get comfortable with the command line. Navigate, create files, run scripts.
- Week 5-6: Build something simple with Cursor or Bolt. A personal website, a to-do app, anything.
- Week 7-8: Learn basic HTML/CSS by customizing what the AI generated. Understand how to make changes yourself.
- Week 9-10: Add a backend with Supabase. User authentication, saving data, the basics of a real application.
- Week 11-12: Deploy to Vercel or Netlify. Put something on the internet that you built.
By the end of three months, you'll understand enough to have intelligent conversations with developers, evaluate technical proposals, and build increasingly sophisticated tools with AI assistance.
The Next Frontier: AI Agents
Everything we've discussed so far involves AI that responds to your prompts. You ask, it answers. You supervise, it executes. The emerging wave is fundamentally different: AI that takes action autonomously—agents that can browse the web, execute code, call APIs, send messages, and complete multi-step tasks without constant supervision.
This is where things get both exciting and genuinely concerning for healthcare.
What Are AI Agents, Really?
An AI agent is a system that can:
- Perceive: Take in information from the environment (emails, databases, APIs, sensors)
- Plan: Break complex goals into subtasks and decide what to do next
- Act: Execute those plans by calling tools, writing code, sending messages, or modifying systems
- Learn (sometimes): Adjust behavior based on feedback and outcomes
The key difference from a chatbot: agents operate in loops without waiting for human approval at each step. You give them a goal, and they figure out how to achieve it.
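In code, that loop is surprisingly small. Here's a conceptual sketch in plain Python of the perceive-plan-act cycle described above; the toy planner stands in for a real LLM, and every name is illustrative rather than any framework's actual API.

```python
# Conceptual agent loop: plan, act, observe, repeat, with no human in between.
# choose_action() stands in for an LLM deciding the next step; the tools
# are ordinary functions the agent is allowed to call.

def check_inbox() -> str:
    return "2 unread messages about lab results"

def draft_reply(summary: str) -> str:
    return f"Drafted outreach based on: {summary}"

TOOLS = {"check_inbox": check_inbox, "draft_reply": draft_reply}

def choose_action(goal: str, history: list) -> tuple:
    # Toy planner: a real agent would ask an LLM, given the goal and history.
    if not history:
        return ("check_inbox", {})
    if len(history) == 1:
        return ("draft_reply", {"summary": history[-1]})
    return ("finish", {})

def run_agent(goal: str, max_steps: int = 10):
    history = []
    for _ in range(max_steps):  # hard step limit: a basic safety rail
        name, args = choose_action(goal, history)
        if name == "finish":
            return history
        observation = TOOLS[name](**args)  # act, then record what happened
        history.append(observation)
    raise RuntimeError("Step limit reached without finishing")

print(run_agent("Triage my inbox"))
```

The step limit and the explicit tool whitelist are the crude ancestors of the safety mechanisms discussed below: the agent can only do what its tools allow, and only for so long.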
The Promise: Healthcare Scenarios
Imagine AI agents that could:
- Panel Management: Monitor your entire patient panel overnight, identifying patients who missed medications, skipped appointments, or have lab results that need attention—then draft appropriate outreach for your review in the morning.
- Prior Authorization: Handle the entire prior auth process—pulling relevant documentation, filling out forms, submitting requests, tracking status, and escalating denials—only involving you when clinical judgment is needed.
- Literature Surveillance: Monitor PubMed and preprint servers for papers relevant to your specialty, summarize findings, and alert you when something might change your practice.
- Care Coordination: Track referrals, ensure follow-up appointments are scheduled, verify imaging results are received, and flag when care plans aren't being executed.
- Documentation Completion: After your visit, automatically pull in relevant outside records, reconcile medication lists, update problem lists, and draft care-gap assessments.
These aren't science fiction—early versions exist today. The question is whether we can deploy them safely.
The Peril: What Could Go Wrong
AI agents amplify both capabilities and risks:
- Compounding Errors: When agents operate in loops, a small error in step 1 can cascade through steps 2-10. By the time a human notices, significant damage may be done.
- Misaligned Goals: An agent optimizing for "efficient patient outreach" might send overwhelming numbers of messages, contact patients about sensitive diagnoses inappropriately, or prioritize metrics over actual patient benefit.
- Reduced Oversight: The whole point of agents is to operate without constant supervision. But "without constant supervision" can quickly become "without adequate supervision."
- Authorization Creep: Agents need permissions to act. A prior auth agent needs access to patient records and insurer portals. A documentation agent needs EHR write access. Each permission creates attack surface and potential for misuse.
- Liability Uncertainty: When an agent makes a mistake, who's responsible? The clinician who deployed it? The vendor who built it? The institution that approved it? We don't have clear answers yet.
The more capable agents become, the less we can afford to supervise every action. But the more autonomous they become, the more important supervision is. Healthcare will need to develop new frameworks for "appropriate autonomy"—not the binary choice between "human does everything" and "AI does everything," but graduated trust based on task risk, agent capability, and verification mechanisms.
Current Agent Technologies
If you want to start experimenting with agents (in low-stakes settings), here's the landscape:
Getting Started Safely
If you want to experiment with agents, start with:
- Non-clinical tasks first: Build agents for scheduling, email triage, literature searches—anything where mistakes are annoying, not dangerous.
- Human-in-the-loop always (for now): Configure agents to propose actions rather than execute them. You approve, then the agent acts (see the sketch after this list).
- Narrow scope: An agent that does one thing well is safer than a general-purpose agent that can do everything poorly.
- Audit logging: Every action the agent takes should be logged. You need to be able to reconstruct what happened and why.
- Kill switches: Always have a way to shut down an agent immediately, paired with automated monitoring that triggers alerts (and potentially automatic shutdown) when behavior seems anomalous.
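Here's the propose-then-approve pattern promised above, with audit logging built in: a minimal, illustrative Python sketch (the function and file names are mine, not any particular framework's). The agent suggests an action, a human gates it, and every decision gets written down.

```python
# Propose-then-approve with an audit trail: the agent never acts silently.
import json
import time

def gated_action(description: str, execute, log_path: str = "agent_audit.jsonl"):
    """Show a proposed action, run it only on explicit approval, log everything."""
    print(f"Agent proposes: {description}")
    approved = input("Approve? [y/N] ").strip().lower() == "y"
    entry = {"timestamp": time.time(), "action": description, "approved": approved}
    if approved:
        entry["result"] = str(execute())  # act only after a human says yes
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # one JSON record per decision
    return entry

# Example: the "action" here is a harmless stand-in for a real operation.
gated_action("Send reminder email to journal club list", lambda: "email queued")
```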
The foundational knowledge you've built—especially around supervision, evaluation, bias, and governance—becomes even more critical as AI moves from "tool I use" to "system that acts on my behalf."
AI Browsers: A Glimpse of the Agentic Future
2025 saw the emergence of "agentic browsers"—web browsers with built-in AI that can take actions on your behalf. These are worth knowing about, even if they're not ready for clinical use.
The appeal is obvious: tell the browser "find me the cheapest flight to Chicago next Tuesday" and watch it navigate multiple airline sites, compare prices, and present options. Or "summarize all the tabs I have open about this clinical trial."
These tools are genuinely impressive for personal productivity, but they carry significant risks for anything involving patient information:
- Prompt injection attacks: Malicious websites can embed hidden instructions that hijack the AI agent, potentially causing it to leak information, navigate to phishing sites, or take unintended actions. This is an unsolved security problem.
- Data exposure: When an AI browser "reads" a page for you, that content may be processed by external servers. Browser memories and context features raise questions about what data is retained and how it's used.
- Unpredictable behavior: Agent mode can click links, fill forms, and navigate autonomously. In a healthcare context, unintended actions could have serious consequences.
- Audit gaps: Traditional browsers leave clear audit trails. Agentic browsers that take autonomous actions make it harder to reconstruct what happened and why.
The bottom line: AI browsers are fun to explore for personal use—booking travel, shopping, research, organizing information. They offer a real preview of where human-computer interaction is heading. But keep them far away from anything involving PHI or clinical decision-making until the security and privacy landscape matures significantly.
Your First Projects: Where to Start
Theory is necessary but insufficient. You need to build things. Here are concrete first projects organized by complexity and risk level:
Level 1: Zero Risk, High Learning
These projects involve only your own data or synthetic data. No patient information, no clinical decisions.
- Personal Literature Workflow: Set up NotebookLM with 5-10 papers from your specialty. Practice having it synthesize across sources, identify contradictions, and generate podcast-style summaries. Document what works and what doesn't.
- Study Aid Generator: Use Claude or ChatGPT to create flashcards, practice questions, or case vignettes from textbook material you already own. Experience the AI as a learning tool.
- Meeting Summarizer: Record a non-confidential meeting (staff meeting, journal club) and run it through Granola or another AI notetaker. Compare the AI summary to your own notes. Where does it add value? Where does it fail?
- Personal Productivity Agent: Use n8n or Zapier to connect your email, calendar, and a language model. Build a simple agent that summarizes your daily emails or suggests time blocks for focused work. Learn agent architecture on low-stakes tasks.
Level 2: Professional Context, Personal Use
These projects involve your professional life but not direct patient care. Still relatively low risk.
- Grant or Protocol Assistant: Feed a grant RFA or study protocol into Claude and iterate on specific aims, background sections, or methods. You maintain full control; the AI is just a drafting partner.
- Teaching Material Generator: Create patient education handouts, resident teaching scripts, or presentation slides using AI. Practice prompt engineering for medical accuracy and appropriate reading level.
- Documentation Template Builder: Design templates for your most common note types. Use AI to draft initial versions, then refine them based on what actually works in your workflow.
- Custom GPT for Your Specialty: Build a Custom GPT loaded with your specialty's guidelines, your preferred references, and instructions for how to respond. Share it with colleagues, gather feedback, iterate.
Level 3: Clinical Adjacent, Supervised
These projects touch clinical workflows but maintain human oversight at every step.
- Prior Auth Draft Helper: Build a system that takes a clinical scenario and drafts prior authorization letters citing relevant guidelines. You review and send every letter; the AI accelerates drafting.
- Differential Diagnosis Expander: Create a tool that takes your initial differential and suggests additional considerations based on guidelines and literature. Use it as a cognitive forcing function, not a replacement for your thinking.
- Patient Message Draft Responder: Build a system that drafts responses to patient portal messages. Every message is reviewed by you before sending. Track where AI saves time and where it creates problems.
Review your institution's policies on AI use. Get appropriate approvals. Consider PHI/HIPAA implications. Document what you're doing and why. These projects should enhance your learning and potentially improve care—but only if done responsibly.
Your Next Steps (Pick One)
Don't try to do everything. The biggest mistake people make when finishing a course like this is creating a massive to-do list and then doing nothing because it's overwhelming. Pick one thing from this list and do it this week:
- Subscribe to one newsletter from the list above. Just one. Read it for a month before adding another.
- Build a Custom GPT for something you actually need—patient FAQs, differential diagnosis helper, literature summarizer, study guide generator.
- Try Claude Code or Cursor on a small project. A personal website. A simple tool. A script that automates something tedious. Something with low stakes where you can experiment.
- Identify your niche. What specific problem in your work would you most like AI to solve? Write it down in one sentence. Start researching how others are approaching it.
- Teach someone else what you learned here. Give a lunch talk. Write a blog post. Explain it to a colleague. Teaching is the best way to solidify your own understanding.
- Run a simple evaluation. Take an AI tool you're already using and create 10 challenging test cases from your practice. Document how it performs. Share your findings.
- Set up your first agent workflow. Connect your email to a language model using n8n or Zapier. Have it summarize your unread messages each morning. Experience what "AI that takes action" feels like.
Seriously—pick one. Finish it. Then pick another.
The 30-Day Challenge
If you want more structure, here's a simple 30-day challenge to maintain momentum:
- Days 1-7: Use AI every day for something in your professional life. Doesn't matter what—literature search, email drafting, note template, patient education. Just build the habit of reaching for AI tools.
- Days 8-14: Pick one tool and go deep. If it's ChatGPT, explore Custom GPTs. If it's Claude, try the Projects feature. If it's NotebookLM, load it with your most-used references. Understand what your chosen tool can really do.
- Days 15-21: Build something. A Custom GPT. A simple automation. A documentation template. Something tangible that you can point to and say "I made that."
- Days 22-28: Share what you built. Show a colleague. Post about it (anonymized, of course). Get feedback. Iterate.
- Days 29-30: Reflect and plan. What worked? What didn't? What do you want to learn next? Write it down.
This isn't about becoming an AI expert in 30 days. It's about building momentum that continues beyond any structured program.
Finding Your Community
Learning alone is harder than learning with others. Consider:
- Local interest groups: Does your hospital or medical society have an AI interest group? If not, start one. Even 3-4 people meeting monthly creates valuable accountability.
- Online communities: LinkedIn has active healthcare AI discussions. Find where your specialty's AI conversations happen.
- Conferences: AMIA (American Medical Informatics Association), HIMSS, and specialty-specific conferences increasingly feature AI tracks. These are good for seeing what's actually being deployed, not just what's being hyped.
- Industry connections: Vendors are often happy to demo their tools and discuss use cases. Take these conversations with appropriate skepticism, but they're a window into what's being built.
The people who will shape how AI transforms healthcare aren't the ones who read the most articles or took the most courses. They're the ones who started building, started experimenting, started teaching others, and started learning from their mistakes.
You have the foundation now. The rest is up to you.
All Resources
A consolidated list of everything mentioned above. These are starting points—you'll discover your own favorites as you go deeper.
Stay Current
Learn & Build
AI Evaluation
Developer Tools
AI Agents & Automation
The clinicians who will matter most in the next decade aren't necessarily the ones with the most AI knowledge. They're the ones who learned to think clearly about what AI should and shouldn't do, who maintained their clinical judgment while leveraging AI capabilities, and who helped their patients navigate an increasingly AI-augmented healthcare system.
You've started that journey. Keep going.