NEXT STEPS

So...What Next?

You've built a foundation. Now it's time to find your path, keep learning, and start building things that matter.

The Most Important Thing

Start. Today. Don't wait until you feel ready. Don't wait until you've read one more article or taken one more course. The best way to learn AI is to use AI. Build something small. Break it. Fix it. Build something else. The people who will thrive in an AI-augmented world are the ones who started before they felt qualified.

Congratulations—Now the Real Work Begins

If you've made it through this curriculum, you now understand more about AI in healthcare than most of your colleagues. You know how language models think (and fail to think). You understand the privacy landscape. You can evaluate clinical AI tools critically. You've seen the promise and pitfalls of AI-powered documentation and decision support.

But here's the uncomfortable truth: everything you just learned has a half-life.

The field is moving at a pace that makes even experts feel like perpetual beginners. GPT-4 was released in March 2023. By the time you read this, we'll be multiple generations beyond it—and whatever comes next will probably surprise us. The specific tools, the specific capabilities, the specific limitations—all of these are shifting under our feet.

So what's the point of learning any of this if it's going to change?

Because the fundamentals transfer. Critical evaluation skills don't expire. Understanding how to supervise AI outputs works across generations. Knowing how to think about privacy, bias, and clinical integration—these are durable skills even as the technology evolves.

Your job now isn't to "finish" learning about AI. It's to build habits for continuous learning so you can evolve alongside the technology.

Accept the Speed (Or Get Left Behind)

Let's be honest about what you're up against. 2025 alone brought a steady stream of new models, new capabilities, and new tools from the major AI labs, and that's just the major releases from major players. The startup ecosystem, the research labs, the open-source community—all moving at similar velocity.

You can't keep up with everything. No one can. The goal isn't comprehensive knowledge—it's developing an efficient information diet and learning how to quickly evaluate what matters for your work.

Build Your Information Diet

You need reliable sources that filter the noise without missing the signal. The resources below are some of my favorites—they're a good place to start, but you'll find new ones you love as you develop your own information diet.

Lenny's Newsletter: Product management and tech strategy with increasingly strong AI coverage. Good for understanding how companies are actually deploying these tools.
Accessible, practical AI updates without the hype. Good for clinicians who want to understand what's happening without drowning in technical details.
Quick daily updates on AI developments. Skim it in 5 minutes with your morning coffee.
Thoughtful, balanced analysis of AI developments. Goes deeper than headlines without requiring a CS degree.
Conversations with people actually using AI in their work. Practical insights over theoretical discussions.

Subscribe to 2-3 of these. Skim them weekly. Most items won't be relevant to you—that's fine. You're looking for the occasional piece that changes how you think or points to something you should try.

The Lenny's Pass: A Shortcut Worth Knowing About

One resource deserves special mention. Lenny's Newsletter offers something called "Lenny's Pass" (or "Product Pass") to annual subscribers that's genuinely remarkable for anyone wanting to experiment with AI tools.

For $200-350/year, annual subscribers get free year-long access to many of the most popular AI-powered tools in the world—tools that would cost thousands of dollars individually.

The catch: you need to be a new customer of each product to get the free year, and some popular tools sell out. The $350 "Insider" tier guarantees access to everything.

This isn't an endorsement—it's a practical observation. If you're planning to experiment with AI development tools anyway, this bundle could save you significant money while giving you access to experiment with multiple platforms. The newsletter itself is also genuinely excellent for understanding how tech companies think about product development and AI integration.

Find Your Niche

Here's the liberating secret: you don't need to be an AI generalist.

The person who becomes known as "the one who figured out how to use AI for [specific clinical workflow]" will be more valuable than someone with broad but shallow AI knowledge. Depth beats breadth, especially in a field this complex.

Think about what frustrates you most in your daily work.

Pick one area. Spend 90% of your AI learning time there. Become the expert in that niche. Then expand.

The Specialist Advantage

A pediatric hospitalist who deeply understands AI documentation will contribute more to their field than an administrator who superficially understands all AI applications. Your clinical expertise is the differentiator—combine it with focused AI knowledge and you become genuinely valuable.

Learning Resources: From Foundations to Frontiers

The foundation you've built here should make everything below more accessible. Pick based on your goals.

For Deepening Technical Understanding

Andrew Ng's courses remain the gold standard for understanding AI fundamentals. Start with "AI for Everyone" if you want conceptual grounding, or "Generative AI for Everyone" for LLM-specific content.
Extensive AI course catalog. Look for Stanford's machine learning courses or specialized healthcare AI offerings.
AI safety and alignment courses. If you're concerned about the long-term trajectory of AI (and you should be), this is essential reading.

For Learning to Evaluate AI Systems: The Critical Skill

Evals—short for evaluations—is the emerging discipline of rigorously testing what AI systems can and can't do. This might sound academic, but it's arguably the most important skill for anyone deploying AI in clinical settings. Here's why.

When a vendor tells you their AI is "95% accurate," what does that actually mean? Accurate at what? On what population? Measured how? Under what conditions? The answers to these questions determine whether the tool will help or harm your patients.

Consider a simple example: an AI scribe claims "95% accuracy" in generating clinical notes. But what if the 5% it gets wrong is concentrated in medication doses, allergy lists, or the complicated patients whose notes matter most? The headline number alone can't tell you.

Understanding evals means understanding what questions to ask, how to interpret the answers, and—increasingly—how to run your own evaluations before deploying AI in your practice.

The Eval Mindset

Think like a skeptical scientist, not a hopeful adopter. Every AI claim should trigger the question: "How would I test that?" The clinicians who will navigate AI safely are those who habitually ask for evidence and understand what that evidence actually shows.

Core Concepts in AI Evaluation

You don't need to become an evaluation scientist, but understanding a few key concepts will make you a much more sophisticated consumer of AI tools.

Learning to Run Your Own Evals

The definitive course on AI evaluation methodology. Learn how to design test sets, measure performance, and identify failure modes. Taught by Hamel Husain, who built evaluation frameworks at major AI companies.
A free introduction to why evals matter and how to think about them. Start here before committing to the full course.
How Anthropic thinks about evaluating Claude. Gives insight into what sophisticated AI evaluation looks like at the frontier.

The practical application: Before deploying any AI tool in your practice, create a test set of challenging cases from your own patient population. Run the AI on these cases. Have colleagues blindly evaluate the outputs. Document failures. This isn't academic—it's basic due diligence that most healthcare organizations skip.
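
To make that concrete, here's a minimal sketch of what a homegrown eval harness could look like in Python. Everything here is illustrative: test_cases.csv is a file of de-identified or synthetic cases you'd build yourself, and run_ai_tool is a placeholder you'd replace with a call to whatever tool you're actually evaluating.

# Minimal eval harness sketch: run an AI tool over a hand-built test set
# and save the outputs for blinded review. Illustrative only.
import csv

def run_ai_tool(case_text):
    """Placeholder for the tool under test (vendor API, local model, etc.)."""
    raise NotImplementedError("Replace with a call to the tool you're evaluating")

def run_eval(input_path="test_cases.csv", output_path="eval_outputs.csv"):
    # Expects columns: case_id, input_text
    with open(input_path, newline="") as f:
        cases = list(csv.DictReader(f))

    rows = []
    for case in cases:
        try:
            output = run_ai_tool(case["input_text"])
        except Exception as exc:          # record failures instead of crashing
            output = f"ERROR: {exc}"
        rows.append({"case_id": case["case_id"], "ai_output": output})

    # Write outputs without the inputs so colleagues can grade them blind
    with open(output_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["case_id", "ai_output"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    run_eval()

Even a ten-case set run through a harness like this will surface failure modes that a vendor's headline accuracy number never will.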

If You Want to Build: Developer Foundations

Vibe coding is real and powerful—but the most capable builders understand at least the basics of what's happening under the hood. You don't need to become a software engineer, but some foundational skills will multiply your effectiveness dramatically.

Think of it like learning to drive. You don't need to be a mechanic, but understanding that the car needs gas, how brakes work, and what that warning light means makes you a much better driver. The same applies to building with AI.

Git: Your Safety Net

Git is version control—it lets you save snapshots of your work and roll back when things break. Every serious software project uses it. Even if you're vibe coding, understanding Git means you can experiment freely, knowing you can always return to the last version that worked.

GitHub (the most popular Git hosting service) also gives you free website hosting, issue tracking, and collaboration tools. Learning the basics takes an afternoon and pays dividends forever.

The best free introduction to Git and GitHub. Interactive, well-paced, and you'll have a working repository by the end.
A comprehensive video tutorial if you prefer watching to reading. About an hour, covers everything you need.
The Five Git Commands You Actually Need

git status — See what's changed
git add . — Stage all changes
git commit -m "message" — Save a snapshot with a description
git push — Upload to GitHub
git pull — Download changes from GitHub

That's it. You can learn more commands later, but these five will handle 90% of what you need.

The Command Line: Where Power Lives

The command line (terminal, shell, CLI—same thing) feels intimidating but is simpler than you think. It's just typing commands instead of clicking buttons. Why bother? Because many of the most powerful AI tools, including Claude Code and Codex, live there.

How to open the terminal: On Mac, search for "Terminal" in Spotlight (Cmd+Space) or find it in Applications → Utilities. On Windows, search for "PowerShell" or "Windows Terminal" in the Start menu. You'll see a blank window with a blinking cursor—that's it. You're in.

The Commands You Actually Need

Navigation:
cd foldername — Change directory (go into a folder)
cd .. — Go up one level
cd ~ — Go to your home directory
ls — List files in current folder (dir on Windows)
pwd — Print working directory (shows where you are)

File operations:
mkdir foldername — Create a new folder
touch filename — Create a new empty file
cp file1 file2 — Copy a file
mv file1 file2 — Move or rename a file
rm filename — Delete a file (careful—no trash can!)

Useful extras:
clear — Clear the screen when it gets cluttered
history — See your recent commands
ctrl+c — Stop whatever's currently running

Open Terminal and try navigating around. You can't break anything just by looking.

Pro tip: When you're learning, have an AI chat window open alongside your terminal. When you get stuck or see an error, paste it into Claude or ChatGPT and ask what went wrong. This is how most developers actually work—nobody memorizes everything.

Free interactive course. About 4 hours total, but the first module (30 minutes) teaches you enough to be functional.
Comprehensive video tutorial for Mac/Linux. About an hour, covers everything you need and more.

APIs: How Software Talks to Software

An API (Application Programming Interface) is how different software systems communicate with each other. When you use ChatGPT through the website, you're using their interface. When you build your own tool that uses GPT, you use their API—a structured way to send requests and receive responses programmatically.

Understanding APIs matters because the moment you build your own tool, rather than just using someone else's interface, you'll be working through one.

How APIs Work (The Simple Version)

Think of an API like a restaurant. You (the customer) don't go into the kitchen—you give your order to a waiter (the API), who brings it to the kitchen (the server), and returns with your food (the response). The menu tells you what you can order and how to order it (the documentation).

In practice, using an AI API means:

  1. You send a request containing your prompt and settings
  2. The AI processes it on their servers
  3. You receive a response containing the AI's output
  4. You pay based on usage (typically per token—roughly per word)
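
To make "pay per token" concrete, here's a rough back-of-the-envelope estimate in Python. The prices and the words-to-tokens ratio below are illustrative assumptions, not current rates; check the provider's pricing page before budgeting anything real.

# Rough cost estimate for a single API call (illustrative numbers only)
words_in_prompt = 500
words_in_response = 300
tokens_per_word = 1.3               # rough rule of thumb for English text

input_tokens = words_in_prompt * tokens_per_word
output_tokens = words_in_response * tokens_per_word

price_per_million_input = 3.00      # assumed USD per million input tokens
price_per_million_output = 15.00    # assumed USD per million output tokens

cost = (input_tokens / 1_000_000) * price_per_million_input \
     + (output_tokens / 1_000_000) * price_per_million_output

print(f"Estimated cost for this call: ${cost:.4f}")   # well under a penny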

API Keys: Your Access Credential

To use an API, you need an API key—a long string of characters that identifies you and authorizes your requests. It's like a password that lets the service know who's making requests and who to bill.

Protect Your API Keys

API keys are sensitive. Anyone with your key can make requests and you'll be charged. Never share them publicly, don't commit them to GitHub, and don't paste them into chat windows. Store them securely—most developers use environment variables or secret management tools.
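
As a small, hedged illustration of that advice: assuming you've set an environment variable named ANTHROPIC_API_KEY in your shell, reading it at runtime keeps the key out of your code and out of your Git history.

# Read an API key from an environment variable instead of hardcoding it
import os

api_key = os.environ.get("ANTHROPIC_API_KEY")  # name is an example; use whatever you set
if api_key is None:
    raise RuntimeError("Set the ANTHROPIC_API_KEY environment variable first")

# Pass api_key to the client library rather than pasting the key into your source files

Many AI SDKs will also look for a conventionally named environment variable on their own; check each library's documentation for the exact name it expects.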

Getting API Keys from Major Providers

Go to platform.openai.com → Sign up or log in → API Keys → Create new secret key. You'll need to add a payment method. Pricing is pay-as-you-go based on tokens used. GPT-4o is roughly $2.50-10 per million tokens depending on input/output.
Go to console.anthropic.com → Sign up or log in → API Keys → Create Key. Requires payment method. Claude Sonnet is roughly $3-15 per million tokens. Anthropic's API is known for generous rate limits and reliable uptime.
Go to aistudio.google.com → Get API key → Create API key in new project. Google offers a generous free tier for experimentation. Gemini Flash is very cost-effective for high-volume use cases.

Your First API Call

Once you have an API key, the simplest way to test it is through each provider's "playground" or documentation. But if you want to use it programmatically, here's what a basic API call looks like in Python (the most common language for AI work):

# Example: calling the Claude API with the anthropic Python SDK (pip install anthropic)
import anthropic

# In real code, load the key from an environment variable instead of hardcoding it
client = anthropic.Anthropic(api_key="your-api-key-here")

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # which model to use
    max_tokens=1024,                   # upper limit on the length of the response
    messages=[
        {"role": "user", "content": "Explain APIs in one paragraph."}
    ]
)

# The response contains a list of content blocks; the first one holds the text
print(message.content[0].text)

You don't need to understand every line right now. The point is: it's not magic. You're sending a message, specifying which model to use, and getting a response back. AI coding assistants can help you write and modify code like this—you describe what you want, they handle the syntax.

Understanding How Web Apps Work

If you want to build tools that run in a browser (rather than just on your local machine), it helps to understand the basic architecture: a frontend (what the user sees and interacts with in the browser), a backend (the server that handles logic and data), a database (where information lives), and the APIs that connect them.

You don't need to master all of this—that's what AI coding assistants are for. But understanding the concepts helps you communicate with the AI and debug when things go wrong.

AI-Powered Development Tools

These are the tools that make building accessible to non-engineers. The first three deserve special attention—they're genuinely transformative, but that power comes with real responsibility.

Cursor: The AI Code Editor

The AI-native code editor. If you're going to build anything beyond a simple script, start here. It's VS Code with AI superpowers—code completion, chat about your codebase, and the ability to make changes across multiple files. The $20/month Pro tier is worth it.

Cursor is where most people should start. It looks and feels like VS Code (the most popular code editor), but with AI deeply integrated. You can highlight code and ask "what does this do?" You can describe a feature and watch it write the implementation. You can paste an error message and get a fix.

The "Composer" feature is particularly powerful—it can make coordinated changes across multiple files, understanding how your codebase fits together. For building real applications, this is transformative.

Power Requires Caution

Cursor can run commands, modify files, and execute code. When it's working well, this is magical. When it makes a mistake, it can break things quickly. Always use Git so you can roll back. Review changes before accepting them. Start with small projects where mistakes are cheap. The AI is confident even when it's wrong—your job is to verify.

Claude Code and Codex: AI in the Command Line

Claude Code (Anthropic)
Anthropic's command-line AI assistant. Lives in your terminal, can read your entire codebase, write and edit files, run commands, and handle complex multi-step tasks. Requires a Claude Pro or Max subscription.
Codex (OpenAI)
OpenAI's command-line coding agent. Similar capabilities to Claude Code—reads codebases, writes files, executes commands. Requires a ChatGPT Plus or Pro subscription.

These tools are more powerful than Cursor in some ways—they operate directly in your terminal, can run shell commands, and handle complex multi-step tasks autonomously. You describe what you want ("add user authentication to this app" or "find and fix the bug causing the test to fail"), and they figure out the steps: reading files, making changes, running tests, iterating until it works.

Both are excellent. Claude Code tends to be more thorough and careful; Codex can be faster and more aggressive. Try both and see which one matches how you think. Many developers use both depending on the task.

Autonomous Agents Can Do Real Damage

These tools can delete files, run commands, modify your system. They're operating with your permissions. A misunderstood instruction can cascade into significant damage before you notice. Always work in a Git repository. Always review what the agent is doing. Start with low-stakes projects. The productivity gains are real, but so is the risk of an AI confidently executing the wrong plan.

App Builders and Infrastructure

AI app builders that generate full applications from descriptions. Great for rapid prototyping and learning how modern web apps are structured. Bolt, Lovable, and Replit create full-stack apps; v0 (from Vercel) specializes in UI components.
Supabase: an open-source backend platform that handles databases, authentication, file storage, and APIs. Makes building "real" applications dramatically easier. Free tier is generous enough for learning and small projects.
Vercel and Netlify: deployment platforms that make putting your app on the internet trivially easy. Connect your GitHub repository and they handle the rest. Free tiers are generous.

A Learning Path for the Ambitious

If you want to go beyond vibe coding to actually understanding what you're building:

  1. Week 1-2: Learn Git and GitHub basics. Create a repository, make commits, push to GitHub.
  2. Week 3-4: Get comfortable with the command line. Navigate, create files, run scripts.
  3. Week 5-6: Build something simple with Cursor or Bolt. A personal website, a to-do app, anything.
  4. Week 7-8: Learn basic HTML/CSS by customizing what the AI generated. Understand how to make changes yourself.
  5. Week 9-10: Add a backend with Supabase. User authentication, saving data, the basics of a real application.
  6. Week 11-12: Deploy to Vercel or Netlify. Put something on the internet that you built.

By the end of three months, you'll understand enough to have intelligent conversations with developers, evaluate technical proposals, and build increasingly sophisticated tools with AI assistance.

The Next Frontier: AI Agents

Everything we've discussed so far involves AI that responds to your prompts. You ask, it answers. You supervise, it executes. The emerging wave is fundamentally different: AI that takes action autonomously—agents that can browse the web, execute code, call APIs, send messages, and complete multi-step tasks without constant supervision.

This is where things get both exciting and genuinely concerning for healthcare.

What Are AI Agents, Really?

An AI agent is a system that can pursue a goal over multiple steps: it makes a plan, chooses tools (browsing the web, running code, calling APIs, sending messages), observes the results, and adjusts its approach until the task is done.

The key difference from a chatbot: agents operate in loops without waiting for human approval at each step. You give them a goal, and they figure out how to achieve it.
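
To see what "operating in a loop" means mechanically, here's a toy sketch in Python. The two helper functions are stand-ins for an LLM call and for real tools; an actual agent framework replaces them with model calls, tool definitions, and guardrails.

# Toy agent loop: plan, act, observe, repeat. Stand-in functions only.
def decide_next_action(history):
    """Stand-in for an LLM call that picks the next step given what has happened."""
    return "DONE" if len(history) > 3 else f"search step {len(history)}"

def run_tool(action):
    """Stand-in for actually browsing, running code, or calling an API."""
    return f"result of '{action}'"

def agent_loop(goal, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = decide_next_action(history)
        if action == "DONE":              # the model decides the goal is met
            break
        result = run_tool(action)
        history.append(f"{action} -> {result}")   # feed results into the next decision
    print("\n".join(history))

agent_loop("Summarize recent literature on pediatric asthma")

Real frameworks add memory, tool schemas, and safety checks around this loop, but the core shape is the same.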

The Promise: Healthcare Scenarios

Imagine AI agents that could take on the coordination work around clinical care: triaging your inbox, handling scheduling, running literature searches overnight, pulling together everything relevant to a case before you walk into the room.

These aren't science fiction—early versions exist today. The question is whether we can deploy them safely.

The Peril: What Could Go Wrong

AI agents amplify both capabilities and risks: the same autonomy that makes them useful means a mistake can compound before anyone notices.

The Autonomy Paradox

The more capable agents become, the less we can afford to supervise every action. But the more autonomous they become, the more important supervision is. Healthcare will need to develop new frameworks for "appropriate autonomy"—not the binary choice between "human does everything" and "AI does everything," but graduated trust based on task risk, agent capability, and verification mechanisms.

Current Agent Technologies

If you want to start experimenting with agents (in low-stakes settings), here's the landscape:

n8n
Open-source workflow automation that connects AI to your other tools. Build agents that trigger based on conditions, process data, and take actions across services. Self-hostable for privacy. Great starting point for understanding agent architectures.
LangChain / LangGraph
The most popular framework for building AI agents. LangGraph specifically handles complex agent workflows with loops and decision points. More technical, but the industry standard.
Claude Computer Use / OpenAI Operator
AI that can control a computer—clicking buttons, filling forms, navigating websites. Still early but shows where things are heading. Anthropic's computer use is available; OpenAI's Operator is in preview.
MCP (Model Context Protocol)
Anthropic's open protocol for connecting AI to external tools and data sources. Not an agent framework itself, but the plumbing that makes sophisticated agents possible. Watch this space.

Getting Started Safely

If you want to experiment with agents, start with:

  1. Non-clinical tasks first: Build agents for scheduling, email triage, literature searches—anything where mistakes are annoying, not dangerous.
  2. Human-in-the-loop always (for now): Configure agents to propose actions rather than execute them. You approve, then the agent acts (see the sketch after this list).
  3. Narrow scope: An agent that does one thing well is safer than a general-purpose agent that can do everything poorly.
  4. Audit logging: Every action the agent takes should be logged. You need to be able to reconstruct what happened and why.
  5. Kill switches: Always have a way to shut down an agent immediately. Automated monitoring that triggers alerts (and potentially automatic shutdown) when behavior seems anomalous.
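
Here's a minimal sketch of what points 2 and 4 can look like in practice: a wrapper that shows each proposed action, waits for explicit approval, and appends every decision to an audit log. All names are hypothetical placeholders, not any particular framework's API.

# Human-in-the-loop wrapper sketch: the agent proposes, a human approves,
# and every decision is written to an audit log. Hypothetical names throughout.
import json
import time

AUDIT_LOG = "agent_audit_log.jsonl"

def log_event(event):
    """Append one JSON line per decision so you can reconstruct what happened and why."""
    event["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def execute_with_approval(action_description, action_fn):
    """Show the proposed action and run it only if a human explicitly approves."""
    print(f"Proposed action: {action_description}")
    approved = input("Approve? [y/N] ").strip().lower() == "y"
    log_event({"action": action_description, "approved": approved})
    if approved:
        action_fn()
    else:
        print("Skipped.")

# Example with a harmless placeholder action
if __name__ == "__main__":
    execute_with_approval(
        "Draft a summary of unread messages",
        lambda: print("(the agent would act here)"),
    )

Flipping that approval step to default-yes is exactly the moment an agent becomes autonomous; make it a deliberate decision, not a convenience.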

The foundational knowledge you've built—especially around supervision, evaluation, bias, and governance—becomes even more critical as AI moves from "tool I use" to "system that acts on my behalf."

AI Browsers: A Glimpse of the Agentic Future

2025 saw the emergence of "agentic browsers"—web browsers with built-in AI that can take actions on your behalf. These are worth knowing about, even if they're not ready for clinical use.

Comet (Perplexity)
Free AI browser with built-in assistant that can navigate websites, summarize content, and perform multi-step tasks. Supports multiple LLMs (GPT, Claude, Gemini). Available on Mac, Windows, and Android.
Atlas (OpenAI)
ChatGPT integrated into a browser with "agent mode" that can shop, make reservations, fill forms, and complete multi-step web tasks. Full agent features require Plus/Pro subscription. Currently Mac only.

The appeal is obvious: tell the browser "find me the cheapest flight to Chicago next Tuesday" and watch it navigate multiple airline sites, compare prices, and present options. Or "summarize all the tabs I have open about this clinical trial."

Why AI Browsers Aren't Ready for Clinical Work

These tools are genuinely impressive for personal productivity, but they carry significant risks for anything involving patient information:

  • Prompt injection attacks: Malicious websites can embed hidden instructions that hijack the AI agent, potentially causing it to leak information, navigate to phishing sites, or take unintended actions. This is an unsolved security problem.
  • Data exposure: When an AI browser "reads" a page for you, that content may be processed by external servers. Browser memories and context features raise questions about what data is retained and how it's used.
  • Unpredictable behavior: Agent mode can click links, fill forms, and navigate autonomously. In a healthcare context, unintended actions could have serious consequences.
  • Audit gaps: Traditional browsers leave clear audit trails. Agentic browsers that take autonomous actions make it harder to reconstruct what happened and why.

The bottom line: AI browsers are fun to explore for personal use—booking travel, shopping, research, organizing information. They offer a real preview of where human-computer interaction is heading. But keep them far away from anything involving PHI or clinical decision-making until the security and privacy landscape matures significantly.

Your First Projects: Where to Start

Theory is necessary but insufficient. You need to build things. Here are concrete first projects organized by complexity and risk level:

Level 1: Zero Risk, High Learning

These projects involve only your own data or synthetic data. No patient information, no clinical decisions.

Level 2: Professional Context, Personal Use

These projects involve your professional life but not direct patient care. Still relatively low risk.

Level 3: Clinical Adjacent, Supervised

These projects touch clinical workflows but maintain human oversight at every step.

Before Any Clinical-Adjacent Project

Review your institution's policies on AI use. Get appropriate approvals. Consider PHI/HIPAA implications. Document what you're doing and why. These projects should enhance your learning and potentially improve care—but only if done responsibly.

Your Next Steps (Pick One)

Don't try to do everything. The biggest mistake people make when finishing a course like this is creating a massive to-do list and then doing nothing because it's overwhelming. Pick one thing from this list and do it this week:

Immediate Actions (Pick ONE)
  1. Subscribe to one newsletter from the list above. Just one. Read it for a month before adding another.
  2. Build a Custom GPT for something you actually need—patient FAQs, differential diagnosis helper, literature summarizer, study guide generator.
  3. Try Claude Code or Cursor on a small project. A personal website. A simple tool. A script that automates something tedious. Something with low stakes where you can experiment.
  4. Identify your niche. What specific problem in your work would you most like AI to solve? Write it down in one sentence. Start researching how others are approaching it.
  5. Teach someone else what you learned here. Give a lunch talk. Write a blog post. Explain it to a colleague. Teaching is the best way to solidify your own understanding.
  6. Run a simple evaluation. Take an AI tool you're already using and create 10 challenging test cases from your practice. Document how it performs. Share your findings.
  7. Set up your first agent workflow. Connect your email to a language model using n8n or Zapier. Have it summarize your unread messages each morning. Experience what "AI that takes action" feels like.

Seriously—pick one. Finish it. Then pick another.

The 30-Day Challenge

If you want more structure, set yourself a simple 30-day challenge to maintain momentum: pick one project or habit from the list above and spend a few minutes on it most days for a month.

This isn't about becoming an AI expert in 30 days. It's about building momentum that continues beyond any structured program.

Finding Your Community

Learning alone is harder than learning with others. Find people to compare notes with: colleagues at your institution who are experimenting, online communities working through the same questions, or the people you end up teaching.

The people who will shape how AI transforms healthcare aren't the ones who read the most articles or took the most courses. They're the ones who started building, started experimenting, started teaching others, and started learning from their mistakes.

You have the foundation now. The rest is up to you.


All Resources

A consolidated list of everything mentioned above. These are starting points—you'll discover your own favorites as you go deeper.

Stay Current

Lenny's Newsletter: Product management and tech strategy with strong AI coverage
Accessible, practical AI updates
Quick daily updates
Thoughtful video analysis
Practical use cases

Learn & Build

Andrew Ng's AI courses
Extensive AI course catalog
AI safety courses

AI Evaluation

Comprehensive course on evaluation methodology
Free introduction to evaluation thinking
How frontier AI companies evaluate their models

Developer Tools

Cursor: AI-native code editor
Supabase: open-source backend platform
Lenny's Pass: bundle of AI tools included with an annual newsletter subscription

AI Agents & Automation

n8n: open-source workflow automation
LangChain / LangGraph: agent development framework

A Final Thought

The clinicians who will matter most in the next decade aren't necessarily the ones with the most AI knowledge. They're the ones who learned to think clearly about what AI should and shouldn't do, who maintained their clinical judgment while leveraging AI capabilities, and who helped their patients navigate an increasingly AI-augmented healthcare system.

You've started that journey. Keep going.