USING AI

AI-Powered Search

Answer engines, deep research, and knowing when to just Google it—a practical guide to the new landscape of finding information.

Prioritize This!

If you take one thing from this topic: Match the tool to the task. Use Google for local info and quick facts. Use Perplexity for synthesized answers with citations. Use Deep Research for comprehensive literature review. None of them replace your clinical judgment—they accelerate your access to information.

Introduction: Search Has Split Into Three Lanes

For 25 years, "searching the internet" meant one thing: Google. Type keywords, scan blue links, click through to find what you need. That model is fragmenting rapidly, and the implications for how clinicians find information are profound.

Consider this scenario: A patient asks about a supplement they saw promoted on social media for "reducing inflammation." In the old model, you'd Google it, wade through sponsored results, scan a few pages, and piece together an answer. Now you have options: ask Perplexity for a synthesized answer with citations, use ChatGPT's deep research to get a comprehensive report, or check if Google's AI Overview gives you what you need instantly.

In 2025, you have three distinct approaches to finding information online:

  • Traditional search (Google): keyword queries that return links, now with AI Overviews layered on top
  • Answer engines (Perplexity): synthesized answers with inline citations instead of a page of links
  • Deep research (ChatGPT, Claude, Gemini, Perplexity): multi-minute autonomous investigations that produce long, cited reports

Each has different strengths. Knowing which to reach for—and when to just Google it—will make you dramatically more efficient at finding the information you need. This module will help you navigate this new landscape with practical guidance for clinical scenarios.


Part 1: Answer Engines—Perplexity and the New Search

What Perplexity Is

Perplexity calls itself an "answer engine" rather than a search engine—and the distinction matters. Instead of returning a page of links for you to explore, it synthesizes information from multiple sources and gives you a direct answer with inline citations.

Ask Perplexity "What are the current first-line treatments for community-acquired pneumonia?" and you get a structured response citing UpToDate, IDSA guidelines, and recent literature—not ten blue links to sift through. Every factual claim includes a numbered citation you can click to verify.

The growth has been remarkable: Perplexity now handles 780 million monthly queries, up from 230 million just a year ago. With a $14 billion valuation and 15-30 million active monthly users, it's become a serious alternative to traditional search—particularly for research-oriented tasks.

How It Works

Perplexity combines real-time web search with large language models. When you ask a question:

  1. It searches the web for relevant sources (often 5-15 for quick search, many more for Pro)
  2. It reads and processes those sources, extracting relevant information
  3. It synthesizes an answer, citing each claim with numbered references
  4. It suggests follow-up questions based on related topics

The result is conversational research. You can ask follow-ups, refine your question, and drill deeper—all while seeing exactly where the information comes from. Unlike traditional search, where you're doing the synthesis mentally as you click through links, Perplexity does that cognitive work for you.
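If it helps to see the shape of that pipeline, here is a minimal sketch in Python. The helpers (search_web, synthesize) are hypothetical placeholders for illustration, not Perplexity's actual code or API; the point is the flow of retrieving sources, synthesizing an answer, and attaching numbered citations.

```python
# A minimal sketch of the answer-engine flow described above.
# search_web() and synthesize() are hypothetical stand-ins, not Perplexity's API.

from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    excerpt: str

def search_web(query: str, limit: int = 10) -> list[Source]:
    # Placeholder: a real engine would query a live search index here.
    return [
        Source("CAP guideline summary", "https://example.org/cap-guideline",
               "First-line outpatient therapy for community-acquired pneumonia..."),
        Source("Recent CAP review article", "https://example.org/cap-review",
               "Comparative data on amoxicillin, doxycycline, and macrolides..."),
    ][:limit]

def synthesize(question: str, sources: list[Source]) -> str:
    # Placeholder: a real engine sends the question plus source excerpts to an LLM
    # and asks it to answer using only those excerpts, citing each claim by number.
    citations = "\n".join(f"[{i}] {s.title} ({s.url})" for i, s in enumerate(sources, 1))
    return f"Synthesized answer to: {question}\n\nCitations:\n{citations}"

def answer(question: str) -> str:
    sources = search_web(question)        # 1. find relevant sources
    return synthesize(question, sources)  # 2-3. read and synthesize, with citations
                                          # 4. a real engine would also suggest follow-ups

print(answer("What are current first-line treatments for community-acquired pneumonia?"))
```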

This shift matters because it changes the skill required. Instead of crafting keyword queries and evaluating source credibility from snippets, you're crafting natural language questions and evaluating whether the synthesis is accurate. The literacy required is different—not necessarily easier, but different.

Quick Search vs. Pro Search

Perplexity offers two modes:

Quick Search
  • Speed: fast (~1-2 seconds)
  • Sources: a few
  • Best for: simple questions
  • Query limits: unlimited (free tier)

Pro Search
  • Speed: thorough (~30-60 seconds)
  • Sources: dozens
  • Best for: complex research
  • Query limits: 5/day free, 300/month with Pro ($20/mo)

For quick factual questions, Quick Search is fine. For anything requiring synthesis across multiple sources—literature review, differential diagnosis research, guideline comparisons—Pro Search is worth the wait.

Perplexity for Clinical Research

Perplexity is particularly useful for clinicians when you need to:

  • Get a synthesized, cited answer to a clinical question rather than a page of links
  • Explore the literature quickly before committing to a formal database search
  • Ask conversational follow-up questions as your query evolves

The University of Michigan Library specifically recommends Perplexity for finding current grey literature—think tank reports, institute white papers, professional society statements—that traditional academic databases often miss. It's also useful for tracking down peer-reviewed articles as a starting point for exploration, though it can't access full text behind paywalls.

Academic Sources

Perplexity can search academic literature specifically. Toggle the "Academic" focus to prioritize peer-reviewed sources. It won't replace PubMed for systematic searches, but it's excellent for quick literature exploration. For dedicated academic search, also consider Consensus (academic papers only) or Elicit (AI research assistant for literature review).

Practical Examples

Here's how Perplexity handles real clinical queries:

Example Query

"What is the current evidence for using metformin in prediabetes? Include recent studies and guideline recommendations."

Perplexity returns a structured response covering the DPP trial results, ADA guideline recommendations, recent meta-analyses on cardiovascular outcomes, and emerging research on metformin's effects beyond glucose—each claim with a citation. You can then ask follow-ups: "What about in patients over 65?" or "Any concerns with renal function?"

Compare this to Google, where the same query returns a mix of Mayo Clinic patient information, sponsored pharmacy results, and news articles—requiring you to click through multiple sources to piece together the same information.

Perplexity's Limitations

Despite its strengths, Perplexity has real limitations that matter for clinical use:

  • It can be wrong: studies put its error rate around 13%, and synthesis errors can be subtle
  • It can't access full text behind paywalls, so summaries may rest on abstracts and secondary sources
  • A citation doesn't guarantee the source fully supports the claim attached to it
  • It lacks clinical context: it doesn't know your patient, and it can't tell you when a guideline doesn't apply

Always verify important clinical information against primary sources. Use Perplexity to find those sources quickly, not to replace reading them. The citations are the feature—click them, verify them, and use your clinical judgment to assess whether the synthesis accurately represents what the sources actually say.


Part 2: Google AI Overviews—AI Meets Traditional Search

What Changed

Google hasn't ceded search to Perplexity. Instead, it's added AI Overviews—AI-generated summaries that appear at the top of search results for many queries. These use Google's Gemini model to synthesize information from multiple sources, and they're now rolling out to over a billion users globally.

Search for "symptoms of pulmonary embolism" and you'll see an AI-generated summary before the traditional blue links—covering the classic presentation, risk factors, and when to seek emergency care. Google is trying to give you the best of both worlds: synthesized answers plus the ability to explore sources through the traditional link-based interface below.

This represents Google's acknowledgment that the search paradigm is shifting. Users increasingly want answers, not just links. But Google's approach is additive—AI Overviews enhance traditional search rather than replacing it. The blue links, ads, Maps integration, and vertical search features all remain.

How AI Overviews Compare to Perplexity

  • Speed: Perplexity ~1-2 seconds; AI Overviews near-instant (~0.5 sec)
  • Citations: Perplexity inline, for every claim; AI Overviews less clear, sources listed below
  • Follow-up questions: Perplexity is conversational; AI Overviews require a new search
  • Local/structured data: Perplexity weak; AI Overviews excellent (Maps, Flights, etc.)
  • Error rate (2025 studies): Perplexity ~13%; AI Overviews ~26%

The key insight: Google AI Overviews are better integrated with Google's ecosystem (Maps, Flights, Shopping, Calendar), while Perplexity is better for pure research and synthesis. Google is faster and more seamless; Perplexity is more thorough and better cited.

The Accuracy Problem

The higher error rate for Google AI Overviews (26% vs. Perplexity's 13% in one study) deserves attention. Google's AI Overviews are optimized for speed and integration, not deep research. They can be confidently wrong in ways that are hard to detect without clicking through to sources.

For medical queries specifically, Google has implemented additional safeguards—AI Overviews for health topics often include disclaimers and may link directly to authoritative sources like Mayo Clinic or CDC. But these safeguards aren't perfect, and the convenience of a quick answer can discourage the verification that clinical questions demand.

When Google Still Wins

Despite all the AI advances, traditional Google search remains better for:

  • Local information: pharmacies, directions, hours of operation
  • Navigating to a specific website
  • Real-time data: flights, weather, scores
  • Shopping and transactions
  • Images and videos

The rule of thumb: if you need to do something (book, buy, navigate, call), use Google. If you need to understand something, consider Perplexity. If you need to understand something comprehensively, use deep research.
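If a mnemonic helps, that rule of thumb can be written as a tiny routing function. This is purely illustrative; the categories and tool names simply restate the guidance above.

```python
# Illustrative only: the do / understand / understand-comprehensively rule of thumb.

def pick_search_tool(goal: str) -> str:
    if goal in {"book", "buy", "navigate", "call", "find local info"}:
        return "Google"                      # doing something: local, transactional, real-time
    if goal == "understand a topic":
        return "Perplexity (answer engine)"  # synthesized, cited answer
    if goal == "understand a topic comprehensively":
        return "Deep research (ChatGPT, Gemini, Claude, or Perplexity)"
    return "Start with Google and escalate if you need synthesis"

print(pick_search_tool("find local info"))
print(pick_search_tool("understand a topic comprehensively"))
```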


Part 3: Deep Research—When You Need a Full Investigation

What Deep Research Is

Both Perplexity and the major chat models (ChatGPT, Claude, Gemini) now offer "deep research" modes that go far beyond quick searches. These features autonomously research a topic for minutes—reading dozens or hundreds of sources—then deliver comprehensive reports with citations throughout.

Think of it as having a research assistant who spends 15-30 minutes pulling together everything relevant on a topic, synthesizing it, and presenting it in a structured format. The AI iteratively searches, reads documents, reasons about what it's learned, identifies gaps, searches again, and refines its understanding—mimicking how a human expert might research an unfamiliar topic.
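For the curious, that loop can be sketched in a few lines of Python. Every helper below is a hypothetical placeholder rather than any vendor's implementation; what matters is the cycle of searching, reading, identifying gaps, and searching again before writing the final report.

```python
# A sketch of the iterative deep-research loop described above.
# Every helper here is a hypothetical placeholder, not any vendor's implementation.

def search_web(query: str) -> list[str]:
    # Placeholder: return source excerpts relevant to the query.
    return [f"Excerpt about: {query}"]

def find_gaps(topic: str, notes: list[str]) -> list[str]:
    # Placeholder: a real system asks the model what the notes still fail to cover.
    return [] if len(notes) >= 3 else [f"{topic}: open question #{len(notes) + 1}"]

def write_report(topic: str, notes: list[str]) -> str:
    # Placeholder: a real system drafts a long, structured, cited report.
    return f"Report on {topic}:\n" + "\n".join(f"- {n}" for n in notes)

def deep_research(topic: str, max_rounds: int = 5) -> str:
    notes: list[str] = []
    query = topic
    for _ in range(max_rounds):
        notes.extend(search_web(query))   # search and read for the current question
        gaps = find_gaps(topic, notes)    # reason about what is still missing
        if not gaps:                      # stop once coverage looks complete
            break
        query = gaps[0]                   # refine the next search around a gap
    return write_report(topic, notes)

print(deep_research("GLP-1 agonists in patients with chronic kidney disease"))
```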

Perplexity's deep research, for example, achieved a 93.9% accuracy score on the SimpleQA benchmark—far exceeding standard search or quick responses. For complex topics that require synthesizing multiple perspectives, this thoroughness matters.

When Deep Research Makes Sense

Deep research isn't always the right choice—it takes minutes instead of seconds. But for certain tasks, the investment pays off:

  • Preparing a presentation, grand rounds talk, or written report
  • Getting comprehensive coverage of an unfamiliar or multi-faceted topic
  • Synthesizing multiple perspectives or conflicting evidence
  • Any question where you can wait several minutes for a thorough, cited answer

How the Major Players Compare

ChatGPT Deep Research
  • Research time: 2-5 minutes
  • Sources searched: ~25-50
  • Output length: long (20-40 pages)
  • Strength: detailed, specific recommendations
  • Weakness: limited access (30/month for Plus)
  • Access: Plus/Pro subscription

Claude Web Search
  • Research time: 1-3 minutes
  • Sources searched: ~250-400
  • Output length: moderate (5-10 pages)
  • Strength: synthesized insights, quality over quantity
  • Weakness: shorter reports
  • Access: Pro subscription

Gemini Deep Research
  • Research time: 10-15 minutes
  • Sources searched: ~60-100
  • Output length: very long (40-50 pages)
  • Strength: comprehensive, integrates with the Google ecosystem
  • Weakness: verbose, slow
  • Access: Advanced subscription

Perplexity Deep Research
  • Research time: 2-4 minutes
  • Sources searched: ~50-100
  • Output length: moderate (5-15 pages)
  • Strength: fast, well-cited
  • Weakness: can feel less thorough
  • Access: Pro subscription

Enabling Search in the Big Three

If you're using ChatGPT, Claude, or Gemini, here's how to make them search the web:

ChatGPT

Web search is automatic for many queries. For deep research, look for the "Deep Research" option when starting a new chat, or explicitly ask: "Search the web and give me a comprehensive report on X." Requires Plus ($20/mo) for full access; free users get limited searches.

Claude

Claude can search the web when you ask it to. Say "Search for recent research on X" or enable web search in your conversation. The feature synthesizes across many sources and cites them. Requires Pro subscription ($20/mo).

Gemini

Gemini searches Google automatically for relevant queries. For Deep Research, select the feature from the model dropdown (requires Advanced subscription at $20/mo). Deep Research can also access your Gmail, Drive, and Docs for personalized research.

With Search vs. Without Search

Understanding when models are searching matters for interpreting their answers. By default, ChatGPT, Claude, and Gemini respond from their training data—they're not searching the web unless you specifically ask them to or enable search features.

Without web search
  • Limited to training data (knowledge cutoff)
  • Can't cite specific sources
  • May be outdated on recent topics
  • Faster responses
  • Good for reasoning, writing, and coding

With web search
  • Access to current information
  • Provides citations you can verify
  • Finds recent publications and news
  • Takes longer but is more comprehensive
  • Essential for current events and research

For clinical questions where currency matters—new guidelines, recent studies, drug interactions—always ensure search is enabled or use a dedicated search tool. A question like "What's the current recommendation for aspirin in primary prevention?" will get different answers depending on whether the model is searching or relying on potentially outdated training data.

Know What You're Getting

When you ask ChatGPT or Claude a clinical question, check whether they're citing sources. If there are no citations, the answer is coming from training data, not current search. For anything where guidelines or evidence have changed recently, that distinction matters.


Part 4: Choosing the Right Tool

A Decision Framework

Use Google When...

  • You need local information (pharmacy, restaurant, directions)
  • You want to find a specific website
  • You need real-time data (flights, weather, scores)
  • You want to buy something
  • You need images or videos

Use Perplexity When...

  • You want a synthesized answer, not links
  • You need citations for each claim
  • You're doing quick literature exploration
  • You want to ask follow-up questions
  • You're researching a clinical question

Use Deep Research When...

  • You need comprehensive coverage of a topic
  • You're preparing a presentation or report
  • You want multiple perspectives synthesized
  • You have 5-15 minutes for the AI to work
  • The topic is complex and multi-faceted

Use PubMed/Specialized Databases When...

  • You need systematic, reproducible searches
  • You're writing for peer review
  • You need specific study types (RCTs, meta-analyses)
  • Comprehensiveness is critical
  • You need full-text access through your institution

Clinical Scenarios

Here's how this plays out in practice:

Scenario: Patient asks about a supplement they saw on TikTok
Best tool: Perplexity Quick Search
Why: Fast synthesis with citations you can share. Ask: "What is the evidence for [supplement] for [claimed benefit]? Include any safety concerns or drug interactions."

Scenario: Preparing a grand rounds presentation on new diabetes medications
Best tool: ChatGPT or Gemini Deep Research
Why: Comprehensive coverage, multiple sources, structured output. Request: "Create a comprehensive overview of GLP-1 agonists and SGLT2 inhibitors, including mechanism, major trials, current guidelines, and emerging applications."

Scenario: Finding the nearest 24-hour pharmacy for a patient
Best tool: Google
Why: Local data, maps integration, real-time hours. No AI synthesis needed—you need current, local, actionable information.

Scenario: Systematic review for a research paper
Best tool: PubMed + traditional databases
Why: Reproducible, comprehensive, meets publication standards. AI search can help identify keywords and scope, but formal systematic reviews require documented, reproducible search strategies.

Scenario: Quick refresher on a condition before seeing a patient
Best tool: Perplexity Pro Search or CDS tools
Why: Authoritative summary with citations in under a minute. For established conditions, UpToDate or DynaMed may still be faster and more reliable.

Scenario: Patient brings in a printout from ChatGPT about their condition
Best tool: Perplexity (Academic focus) to verify
Why: Quickly verify claims with current literature. Ask: "What does current evidence say about [specific claim from patient's printout]?"

Scenario: Need to understand a new drug you haven't prescribed before
Best tool: Start with Perplexity, verify with official prescribing information
Why: Perplexity can give you a quick orientation, but always verify dosing, contraindications, and interactions against official FDA labeling.

Part 5: Verification—The Non-Negotiable Step

No matter which tool you use, verification remains essential. AI search tools make finding information faster, but they don't make it automatically accurate. The speed and convenience can actually make verification more important, not less—because the barrier to acting on unverified information has never been lower.

Why Verification Still Matters

A 2025 Frontiers in Digital Health study compared ChatGPT, Gemini, Copilot, Claude, and Perplexity on clinical practice guideline questions for lumbosacral radicular pain. The finding: no AI chatbot provided advice in absolute agreement with CPGs. All had errors, omissions, or outdated information.

This finding is consistent across studies. AI tools can synthesize information effectively, but they lack the clinical context to know when guidelines don't apply, when patient-specific factors change the calculus, or when the latest evidence hasn't yet been incorporated into their sources.

This doesn't mean the tools aren't useful—they dramatically accelerate finding relevant information. But they're research accelerants, not replacements for clinical judgment. Use them to find information faster, then apply your expertise to evaluate and contextualize that information.

Verification Checklist

  • Click at least one citation and confirm it actually supports the claim attached to it
  • Check currency: is this the most recent guideline or study, or is the tool answering from outdated training data?
  • For drug information, verify dosing, contraindications, and interactions against official prescribing information
  • Apply clinical judgment: does this evidence actually apply to this patient?

For Patient-Facing Information

If you're going to share AI-generated information with patients or use it for clinical decisions, always verify against authoritative sources first. The 13-26% error rates in current studies are too high for unverified clinical use. Patients trust that you've vetted the information you share—make sure that trust is warranted.

Building a Verification Habit

The challenge isn't knowing that verification matters—it's actually doing it consistently when the AI-generated answer seems reasonable and you're pressed for time. Some strategies:

  • Make clicking one citation a reflex before you rely on any synthesized answer
  • Treat uncited answers as unverified by default
  • Never share AI-generated information with a patient before checking it against an authoritative source
  • When the stakes are high, go to the primary source or a dedicated clinical resource rather than the summary


Resources

Essential Reading

Head-to-head comparison with specific test queries and accuracy analysis.
Detailed comparison of ChatGPT, Claude, Gemini, Grok, and Perplexity deep research features.
Comprehensive guide to choosing between traditional and AI-powered search.
2025 study comparing ChatGPT, Gemini, Copilot, Claude, and Perplexity on clinical accuracy.
Academic librarian guidance on using Perplexity and other AI tools for research.

Tools to Try

Perplexity
perplexity.ai — Answer engine with citations
Consensus
consensus.app — AI search for academic papers only
Elicit
elicit.com — AI research assistant for literature review
You.com
you.com — AI search with multiple modes

Podcasts & Videos

3-hour deep dive with Perplexity's CEO on the future of search and knowledge.
Hard Fork (NYT): "Is This the End of Google Search?"
Kevin Roose and Casey Newton discuss the AI search disruption.
Official introduction to Perplexity's deep research feature.

Learning Objectives

  • Distinguish between traditional search, answer engines, and deep research tools
  • Use Perplexity effectively for clinical research questions
  • Enable and use web search in ChatGPT, Claude, and Gemini
  • Choose the appropriate search tool for different clinical scenarios
  • Verify AI-generated search results before clinical use