USING AI

How AI Talks to Your Tools

APIs, MCP, and the plumbing that lets AI do more than chat—explained through the language of consults and referrals.

~15 min read · Integrations
The Question

When your AI assistant pulls up a drug interaction, checks your calendar, or searches PubMed—how does it actually do that? It turns out the answer looks a lot like something you already understand: the consult and referral.

The Consult/Referral Model

Every day in clinical practice, you send structured requests to specialists. A referral has a specific format: patient demographics, clinical question, relevant history, and what you need back. The specialist has a defined scope of practice, processes your request, and returns a structured response.

AI integrations work much the same way. When ChatGPT searches the web or Claude reads a PDF, there's a structured request going out, a specialized service processing it, and a structured response coming back. The AI model itself doesn't "know" how to search the web any more than you personally run the MRI machine—it knows how to write the order and interpret the results.

This is the mental model that will carry you through the rest of this module: APIs are the referral system, tools and skills are the specialists, and MCP is the universal referral form.

APIs—The Referral

An API (Application Programming Interface) is a structured way for one piece of software to ask another for something specific. It's the referral form of the software world: a defined format, a specific recipient, and an expected response.
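To make the "referral form" concrete, here is a minimal sketch of composing a structured API request. The endpoint is NCBI's real E-utilities interface for PubMed, but the function name and parameter choices are illustrative, and the sketch only builds the request rather than sending it:

```python
from urllib.parse import urlencode

# The "referral form" has a specific recipient: NCBI's E-utilities search endpoint.
BASE_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_request(query: str, max_results: int = 5) -> str:
    """Fill out the referral form: a defined format with an expected response."""
    params = {
        "db": "pubmed",         # which "specialist" should handle this
        "term": query,          # the clinical question
        "retmax": max_results,  # how much to send back
        "retmode": "json",      # the response format we expect
    }
    return f"{BASE_URL}?{urlencode(params)}"

url = build_pubmed_request("semaglutide cardiovascular outcomes")
print(url)
```

Actually sending the request (with `urllib.request` or similar) would return a structured JSON response—the specialist's report—for the caller to interpret.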

You already use dozens of systems that rely on APIs every day: e-prescribing sends your order to the pharmacy through an API, lab results flow into your EHR through an API, and real-time insurance eligibility checks happen through APIs.

APIs have been around for decades. What's new isn't the concept—it's that AI models can now decide which APIs to call, compose the right request, and interpret the response without a human writing the code each time.

The Key Shift

Traditional software follows pre-programmed paths: if the user clicks this button, then call this API. AI models can reason about which API to call based on natural language. You say "check if this patient's insurance covers Ozempic" and the AI figures out which systems to query and in what order. That's a fundamental change in how software works.
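The shift can be sketched in a few lines. In the function-calling pattern most AI platforms use, the model emits a structured tool call and the application dispatches it. Here the model output is hard-coded for illustration, and the formulary tool is a stub standing in for a real insurance API:

```python
def check_formulary(drug: str, plan: str) -> dict:
    """Stub standing in for a real insurance/formulary API call."""
    return {"drug": drug, "plan": plan, "covered": True, "tier": 3}

# The application's registry of available tools.
TOOLS = {"check_formulary": check_formulary}

# What a model might emit after reading "check if this plan covers Ozempic".
# In practice this structure comes from the AI; here it is hard-coded.
model_tool_call = {
    "name": "check_formulary",
    "arguments": {"drug": "Ozempic", "plan": "Acme PPO"},
}

# Dispatch: the application, not the model, actually calls the API.
result = TOOLS[model_tool_call["name"]](**model_tool_call["arguments"])
print(result)
```

The pre-programmed path ("if button clicked, call this API") is replaced by the model choosing `name` and `arguments` at runtime; the application's job shrinks to dispatching and supervising.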

Tools and Skills—The Specialists on Call

Once AI can make API calls, it needs to know which specialists are available and how to consult them. These packaged capabilities go by different names depending on the platform—tools, skills, plugins, extensions—but they all work the same way: a defined scope of practice, a declaration of what information is needed, and a structured response format.

Real examples of tools AI assistants can use today include web search, PubMed search, code execution, PDF reading, and calendar and email access.

Think of each tool like a specialist with a defined scope of practice. A PubMed search tool declares: "I can search the medical literature. Give me a query and optional filters, and I'll return relevant articles with abstracts." The AI model reads that declaration, decides when to use it, and interprets the results—just like a PCP coordinating care across multiple specialists.
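That "declaration" is literal: tools advertise their scope of practice in a machine-readable schema. The shape below mirrors the common function-calling and MCP tool-declaration format (name, description, JSON Schema for inputs); the specific tool name and fields are illustrative:

```python
# A tool's declared "scope of practice": what it does, what it needs,
# and what is required. The AI model reads this to decide when to consult it.
pubmed_tool = {
    "name": "search_pubmed",
    "description": "Search the medical literature; returns articles with abstracts.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": ["query"],  # the referral is rejected without this
    },
}
print(pubmed_tool["name"], "requires:", pubmed_tool["inputSchema"]["required"])
```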

The AI Doesn't Actually "Know" How

This is an important distinction. When Claude searches PubMed, Claude isn't accessing a medical database stored in its training data. It's sending a real-time request to the National Library of Medicine's API and getting back real results. The model's job is to figure out when to search, what to search for, and how to interpret what comes back. The actual searching is done by the tool. This is why tool-assisted AI responses can be more current and verifiable than responses from the model's training data alone.

Skills: The Evolving Frontier

The concept of what a "tool" is keeps expanding. The latest evolution is skills—higher-level capabilities that chain together multiple tools, remember context, and follow complex workflows. If a tool is a specialist you can consult, a skill is more like a protocol or care pathway: a defined sequence of steps that coordinates multiple specialists toward a specific outcome.

For example, a "literature review" skill might chain together PubMed search, PDF retrieval, summarization, and citation formatting—all triggered by a single request. A "patient prep" skill could pull the schedule, review recent notes, check pending labs, and generate a pre-visit summary. The AI isn't just calling one tool; it's orchestrating a workflow.
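The literature-review example can be sketched as a pipeline. Every function below is a stub (a real skill would call live search, retrieval, and summarization tools), but the structure—one request triggering an ordered chain of tool calls—is the point:

```python
# Stub tools; in a real skill each would be a live API or model call.
def search_pubmed(query: str) -> list[dict]:
    return [{"pmid": "12345", "title": "Example RCT"}]

def fetch_abstract(pmid: str) -> str:
    return f"Abstract text for PMID {pmid}."

def summarize(texts: list[str]) -> str:
    return f"Summary of {len(texts)} abstract(s)."

def format_citations(articles: list[dict]) -> list[str]:
    return [f"[{a['pmid']}] {a['title']}" for a in articles]

def literature_review_skill(query: str) -> dict:
    """A 'care pathway' for literature review: search -> retrieve -> summarize -> cite."""
    articles = search_pubmed(query)
    abstracts = [fetch_abstract(a["pmid"]) for a in articles]
    return {
        "summary": summarize(abstracts),
        "citations": format_citations(articles),
    }

print(literature_review_skill("GLP-1 agonists and weight loss"))
```

The user makes one request; the skill coordinates the specialists and decides the sequence.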

Skills are evolving fast, and every major platform is building its own version of them.

The terminology is still settling—what one platform calls a "skill," another calls a "workflow," "agent," or "custom GPT." The underlying concept is the same: packaging multi-step, multi-tool capabilities into reusable units that an AI can invoke. This is where AI stops being a chatbot and starts acting more like a capable assistant.

Tool Ecosystems and Marketplaces

As tools and skills become more useful, platforms have started building marketplaces where developers can publish them for others to use. OpenAI has the GPT Store. Various open-source projects have their own plugin registries and skill directories. This creates enormous potential—and, as we'll explore in the OpenClaw module, enormous risk. A tool marketplace is only as trustworthy as its vetting process, and many of these ecosystems are moving faster than their security reviews.

MCP—The Universal Referral Form

Here's the problem MCP solves. Before MCP, every AI platform had its own proprietary way of connecting to tools. Building an EHR integration for Claude was completely different from building one for ChatGPT, which was completely different from building one for Gemini. If you wanted your tool to work everywhere, you had to build and maintain separate integrations for each platform.

Sound familiar? It's the same problem healthcare faced before FHIR (Fast Healthcare Interoperability Resources). Every hospital system had its own data format. Transferring a patient record meant custom interfaces between every pair of systems. FHIR said: here's one standard format for exchanging health data, and now everyone speaks the same language.

MCP (Model Context Protocol) does the same thing for AI tools. Published by Anthropic in late 2024 as an open standard, MCP defines a single way to package a tool so that any AI model can use it. Build your PubMed integration once using MCP, and it works with Claude, ChatGPT, Gemini, and any other platform that supports the protocol.

FHIR for AI

FHIR standardized how health systems exchange patient data. MCP standardizes how AI systems connect to tools. Both solve the same fundamental problem: interoperability. And both matter because the alternative—proprietary integrations between every pair of systems—doesn't scale.

MCP Servers in the Wild

MCP has moved well past the whitepaper stage. Real MCP servers are running in production today, and the ecosystem is growing fast. Here's a sampling of what's available:

  • Literature search—query PubMed and other reference databases
  • Calendar and email—read your schedule, and in some cases send messages on your behalf
  • File systems—read, and sometimes write, local documents
  • Databases—query records, and in some configurations modify them

Notice the range. Some of these tools are read-only (searching PubMed), while others can take actions (sending emails, writing files, modifying databases). That distinction matters enormously for safety, and it's one you should keep in mind as AI tools get more capable.

How MCP Actually Works

The architecture is straightforward. An MCP server wraps around an existing service (like PubMed or your calendar) and exposes it in a standardized format. An MCP client is built into the AI application (Claude Desktop, for example) and knows how to discover and communicate with MCP servers. When you ask Claude to search PubMed:

  1. Claude recognizes that answering your question requires the PubMed tool
  2. The MCP client sends a standardized request to the PubMed MCP server
  3. The MCP server translates that into a real API call to the National Library of Medicine
  4. Results come back through the same standardized path
  5. Claude interprets the results and gives you a useful answer
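Step 2 of that flow can be made concrete. On the wire, MCP frames tool calls as JSON-RPC 2.0 messages with a `tools/call` method; the tool name and arguments below are illustrative, and a real client would also handle session setup and the matching response message:

```python
import json

# The standardized request an MCP client sends to an MCP server.
# JSON-RPC 2.0 framing is what MCP uses; the tool and query are examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_pubmed",
        "arguments": {"query": "semaglutide cardiovascular outcomes"},
    },
}
print(json.dumps(request, indent=2))
```

Because every server speaks this same format, the client code above works unchanged whether the server wraps PubMed, a calendar, or an EHR—that is the whole interoperability bet.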

The key insight: the AI model never talks directly to PubMed. The MCP server is a middleman that translates between the AI world and the service world. This creates a natural place for access controls, audit logs, and security checks—though as we'll see in the OpenClaw module, not everyone implements those safeguards.

Where MCP Is Heading

The MCP ecosystem is evolving rapidly, with active work on standardized authentication and authorization, finer-grained permission models, and audit-trail support—the governance plumbing that enterprise and clinical deployments will require.

This is infrastructure being built right now. The decisions being made about authentication, permissions, and audit trails in MCP today will shape how AI interacts with clinical systems for years to come.

Why This Matters for Clinicians

You might be thinking: "I don't need to know how the plumbing works. I just need to know if the tool is reliable." That's fair—and it's the same reasoning you apply to most technology in practice. You don't need to understand TCP/IP to use your EHR.

But there are three reasons this particular plumbing is worth understanding right now:

1. Actions Are Different from Answers

An AI that gives you a wrong answer is one kind of problem. An AI that does the wrong thing—sends the wrong message, modifies the wrong record, schedules the wrong appointment—is a fundamentally different kind of problem. As AI tools gain the ability to take actions through APIs, the stakes of errors change. Understanding that your AI assistant has tools that can read your calendar versus tools that can write to it changes how much supervision you should apply.
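One way to operationalize that supervision is to gate tools by capability: read-only tools run freely, action-taking tools require explicit confirmation. This is a minimal sketch; the tool names and the two-tier split are illustrative, not any platform's actual policy:

```python
# A minimal read-vs-write supervision policy (tool names are hypothetical).
READ_ONLY = {"search_pubmed", "read_calendar", "read_chart"}
WRITES = {"send_email", "write_calendar", "modify_record"}

def run_tool(name: str, confirmed: bool = False) -> str:
    """Reads execute freely; writes require a human in the loop."""
    if name in READ_ONLY:
        return f"{name}: executed (read-only)"
    if name in WRITES:
        if confirmed:
            return f"{name}: executed (confirmed action)"
        return f"{name}: BLOCKED - needs human confirmation"
    raise ValueError(f"Unknown tool: {name}")

print(run_tool("read_calendar"))
print(run_tool("send_email"))
print(run_tool("send_email", confirmed=True))
```

The asymmetry is the point: a wrong answer from a read is recoverable, a wrong action from a write may not be.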

2. Data Flows Where Tools Flow

Every tool connection is a data pathway. When an AI assistant can access your email, your calendar, and your EHR, information from all three can flow through the AI model. This has profound implications for PHI. A question like "What did my patient with diabetes email about last week?" requires the AI to pull data from both clinical and communication systems—and the privacy considerations multiply.

3. The Ecosystem Is Growing Fast

MCP means the tools your AI can access will expand rapidly. Third parties can build integrations without needing permission from the AI vendor. This creates opportunity (a vibrant ecosystem of useful tools) and risk (a vibrant ecosystem of poorly vetted or actively malicious tools). As the next module will make painfully clear, tool marketplaces can become attack surfaces.

With Great Connectivity Comes Great Responsibility

Every tool connection is a potential attack surface. Every API call could carry PHI. Every skill marketplace could host malicious code. When evaluating AI products for your practice, ask:

  • What tools does this AI have access to? Can I see the list?
  • Which tools can read data versus write or act?
  • Where does data flow? What leaves my system?
  • Who built these integrations? How are they vetted?
  • Can I disable tools I don't need?

Further Reading

Anthropic · Official MCP specification and guides
Anthropic · November 2024 · Original announcement
HL7 International · The healthcare interoperability standard that MCP parallels
Andreessen Horowitz · How AI agents are evolving beyond chatbots
GitHub · Open-source directory of available MCP server implementations

Reflection Questions

  1. What tools or integrations would be most valuable if your AI assistant could access them in your clinical workflow? What would you explicitly not want it to access?
  2. What's the difference between an AI that can look up your patient's chart versus one that can write to it? Where would you draw the line?
  3. How should we think about consent when AI tools access patient data through APIs? Is the current consent model for EHR data sufficient when AI is the one querying it?
  4. The FHIR analogy suggests MCP could become critical healthcare infrastructure. What governance should exist around AI tool standards that touch clinical data?