AI-Powered Search
Answer engines, deep research, and knowing when to just Google it—a practical guide to the new landscape of finding information.
If you take one thing from this topic: Match the tool to the task. Use Google for local info and quick facts. Use Perplexity for synthesized answers with citations. Use Deep Research for comprehensive literature review. None of them replace your clinical judgment—they accelerate your access to information.
Introduction: Search Has Split Into Three Lanes
For 25 years, "searching the internet" meant one thing: Google. Type keywords, scan blue links, click through to find what you need. That model is fragmenting rapidly, and the implications for how clinicians find information are profound.
Consider this scenario: A patient asks about a supplement they saw promoted on social media for "reducing inflammation." In the old model, you'd Google it, wade through sponsored results, scan a few pages, and piece together an answer. Now you have options: ask Perplexity for a synthesized answer with citations, use ChatGPT's deep research to get a comprehensive report, or check if Google's AI Overview gives you what you need instantly.
In 2025, you have three distinct approaches to finding information online:
- Traditional search (Google, Bing): Returns links for you to explore—still dominant with 89.7% market share
- Answer engines (Perplexity, Google AI Overviews): Synthesize information and give you answers with citations
- Deep research (ChatGPT, Claude, Gemini, Perplexity Pro): Spend minutes autonomously researching, then deliver comprehensive reports
Each has different strengths. Knowing which to reach for—and when to just Google it—will make you dramatically more efficient at finding the information you need. This module will help you navigate this new landscape with practical guidance for clinical scenarios.
Part 1: Answer Engines—Perplexity and the New Search
What Perplexity Is
Perplexity calls itself an "answer engine" rather than a search engine—and the distinction matters. Instead of returning a page of links for you to explore, it synthesizes information from multiple sources and gives you a direct answer with inline citations.
Ask Perplexity "What are the current first-line treatments for community-acquired pneumonia?" and you get a structured response citing UpToDate, IDSA guidelines, and recent literature—not ten blue links to sift through. Every factual claim includes a numbered citation you can click to verify.
The growth has been remarkable: Perplexity now handles 780 million monthly queries, up from 230 million just a year ago. With a $14 billion valuation and 15-30 million active monthly users, it's become a serious alternative to traditional search—particularly for research-oriented tasks.
How It Works
Perplexity combines real-time web search with large language models. When you ask a question:
- It searches the web for relevant sources (often 5-15 for quick search, many more for Pro)
- It reads and processes those sources, extracting relevant information
- It synthesizes an answer, citing each claim with numbered references
- It suggests follow-up questions based on related topics
The result is conversational research. You can ask follow-ups, refine your question, and drill deeper—all while seeing exactly where the information comes from. Unlike traditional search, where you're doing the synthesis mentally as you click through links, Perplexity does that cognitive work for you.
This shift matters because it changes the skill required. Instead of crafting keyword queries and evaluating source credibility from snippets, you're crafting natural language questions and evaluating whether the synthesis is accurate. The literacy required is different—not necessarily easier, but different.
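The four steps above can be sketched as a toy pipeline. This is purely illustrative — the function names, stub data, and structure are invented for this sketch, not Perplexity's actual implementation; a real answer engine would call a web-search API and a language model at each step.

```python
# Illustrative sketch of an answer-engine pipeline: search, extract,
# synthesize with numbered citations. All functions are stubs with toy data.

def search_web(query):
    """Stub: return candidate sources for a query (a real engine fetches 5-15+)."""
    return [
        {"url": "https://example.org/guideline", "text": "Guideline recommends drug A first-line."},
        {"url": "https://example.org/trial", "text": "Trial shows drug A reduces events by 20%."},
    ]

def extract_claims(sources):
    """Stub: pull the relevant statement out of each source."""
    return [(i + 1, s["text"], s["url"]) for i, s in enumerate(sources)]

def synthesize(claims):
    """Stub: build an answer in which every claim carries a numbered citation."""
    lines = [f"{text} [{n}]" for n, text, _ in claims]
    refs = [f"[{n}] {url}" for n, _, url in claims]
    return "\n".join(lines + [""] + refs)

answer = synthesize(extract_claims(search_web("first-line treatment?")))
print(answer)
```

The key design point the sketch captures: the citation number is attached to each claim at extraction time, so every sentence in the final answer can be traced back to exactly one source — which is what makes the output verifiable.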
Quick Search vs. Pro Search
Perplexity offers two modes:
| Quick Search | Pro Search |
|---|---|
| Fast (~1-2 seconds) | Thorough (~30-60 seconds) |
| Searches a few sources | Searches dozens of sources |
| Best for simple questions | Best for complex research |
| Unlimited (free tier) | 5/day free, 300/month with Pro ($20/mo) |
For quick factual questions, Quick Search is fine. For anything requiring synthesis across multiple sources—literature review, differential diagnosis research, guideline comparisons—Pro Search is worth the wait.
Perplexity for Clinical Research
Perplexity is particularly useful for clinicians when you need to:
- Stay current: "What's new in GLP-1 agonist research in 2025?"
- Find guidelines: "Current AHA recommendations for anticoagulation in AFib"
- Research patient questions: "Safety of turmeric supplements with blood thinners"
- Explore differentials: "Causes of elevated ferritin with normal iron studies"
- Find grey literature: Institute reports, white papers, professional society statements
- Compare treatments: "Head-to-head trials comparing SGLT2 inhibitors"
The University of Michigan Library specifically recommends Perplexity for finding current grey literature—think tank reports, institute white papers, professional society statements—that traditional academic databases often miss. It's also useful for tracking down peer-reviewed articles as a starting point for exploration, though it can't access full text behind paywalls.
Perplexity can search academic literature specifically. Toggle the "Academic" focus to prioritize peer-reviewed sources. It won't replace PubMed for systematic searches, but it's excellent for quick literature exploration. For dedicated academic search, also consider Consensus (academic papers only) or Elicit (AI research assistant for literature review).
Practical Examples
Here's how Perplexity handles real clinical queries:
"What is the current evidence for using metformin in prediabetes? Include recent studies and guideline recommendations."
Perplexity returns a structured response covering the DPP trial results, ADA guideline recommendations, recent meta-analyses on cardiovascular outcomes, and emerging research on metformin's effects beyond glucose—each claim with a citation. You can then ask follow-ups: "What about in patients over 65?" or "Any concerns with renal function?"
Compare this to Google, where the same query returns a mix of Mayo Clinic patient information, sponsored pharmacy results, and news articles—requiring you to click through multiple sources to piece together the same information.
Perplexity's Limitations
Despite its strengths, Perplexity has real limitations that matter for clinical use:
- Still hallucinates: A 2025 Wordstream study found a 13% error rate (better than Google AI Overviews at 26%, but not zero). That's roughly 1 in 8 responses containing some inaccuracy.
- Source quality varies: It sometimes cites sources that plagiarize other articles, or relies on outdated information. One analysis found Perplexity occasionally citing the same underlying text multiple times through different websites, creating an illusion of corroboration.
- Struggles with structured data: Restaurant hours, local business info, real-time schedules—Google is still better. Ask Perplexity for the nearest open pharmacy and you'll often get a generic answer rather than actual local results.
- Can't access paywalled content: Won't see full text of most journal articles without institutional access. It can find that a study exists and summarize the abstract, but can't read the full methodology or results sections.
- Time-sensitive information: For rapidly evolving topics, Perplexity sometimes uses outdated or less trustworthy sources, particularly when recent high-quality sources haven't been indexed yet.
Always verify important clinical information against primary sources. Use Perplexity to find those sources quickly, not to replace reading them. The citations are the feature—click them, verify them, and use your clinical judgment to assess whether the synthesis accurately represents what the sources actually say.
Part 2: Google AI Overviews—AI Meets Traditional Search
What Changed
Google hasn't ceded search to Perplexity. Instead, it's added AI Overviews—AI-generated summaries that appear at the top of search results for many queries. These use Google's Gemini model to synthesize information from multiple sources, and they're now rolling out to over a billion users globally.
Search for "symptoms of pulmonary embolism" and you'll see an AI-generated summary before the traditional blue links—covering the classic presentation, risk factors, and when to seek emergency care. Google is trying to give you the best of both worlds: synthesized answers plus the ability to explore sources through the traditional link-based interface below.
This represents Google's acknowledgment that the search paradigm is shifting. Users increasingly want answers, not just links. But Google's approach is additive—AI Overviews enhance traditional search rather than replacing it. The blue links, ads, Maps integration, and vertical search features all remain.
How AI Overviews Compare to Perplexity
| Feature | Perplexity | Google AI Overviews |
|---|---|---|
| Speed | 1-2 seconds | Near-instant (~0.5 sec) |
| Citations | Inline, every claim | Less clear, sources below |
| Follow-up questions | Conversational | New search required |
| Local/structured data | Weak | Excellent (Maps, Flights, etc.) |
| Error rate (2025 studies) | ~13% | ~26% |
The key insight: Google AI Overviews are better integrated with Google's ecosystem (Maps, Flights, Shopping, Calendar), while Perplexity is better for pure research and synthesis. Google is faster and more seamless; Perplexity is more thorough and better cited.
The Accuracy Problem
The higher error rate for Google AI Overviews (26% vs. Perplexity's 13% in one study) deserves attention. Google's AI Overviews are optimized for speed and integration, not deep research. They can be confidently wrong in ways that are hard to detect without clicking through to sources.
For medical queries specifically, Google has implemented additional safeguards—AI Overviews for health topics often include disclaimers and may link directly to authoritative sources like Mayo Clinic or CDC. But these safeguards aren't perfect, and the convenience of a quick answer can discourage the verification that clinical questions demand.
When Google Still Wins
Despite all the AI advances, traditional Google search remains better for:
- Local information: "Pharmacy near me open now"—Google's Maps integration with real-time hours is unmatched
- Real-time data: Flight status, stock prices, sports scores—structured data that updates continuously
- Navigation: Directions, business hours, contact info—tap to call, tap to navigate
- Shopping: Price comparisons, product availability, reviews—vertical search with purchase intent
- Specific websites: When you need a particular site, not a summary—"UpToDate atrial fibrillation" gets you there faster than asking for one
- Images and videos: Finding specific visual content, medical imaging examples, procedure videos
The rule of thumb: if you need to do something (book, buy, navigate, call), use Google. If you need to understand something, consider Perplexity. If you need to understand something comprehensively, use deep research.
Part 3: Deep Research—When You Need a Full Investigation
What Deep Research Is
Both Perplexity and the major chat models (ChatGPT, Claude, Gemini) now offer "deep research" modes that go far beyond quick searches. These features autonomously research a topic for minutes—reading dozens or hundreds of sources—then deliver comprehensive reports with citations throughout.
Think of it as having a research assistant who spends 15-30 minutes pulling together everything relevant on a topic, synthesizing it, and presenting it in a structured format. The AI iteratively searches, reads documents, reasons about what it's learned, identifies gaps, searches again, and refines its understanding—mimicking how a human expert might research an unfamiliar topic.
Perplexity's deep research, for example, achieved a 93.9% accuracy score on the SimpleQA benchmark—far exceeding standard search or quick responses. For complex topics that require synthesizing multiple perspectives, this thoroughness matters.
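The iterative search-read-reason loop described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual implementation — `search()` and `find_gaps()` are stand-ins for the web-search and language-model calls a real research agent would make.

```python
# Illustrative sketch of a deep-research loop: search, read, identify gaps,
# search again, within a bounded budget. All data below is invented stub data.

def search(question):
    """Stub: return source texts relevant to one question."""
    corpus = {
        "topic overview": ["Source A: overview of the topic."],
        "open question 1": ["Source B: answers open question 1."],
    }
    return corpus.get(question, [])

def find_gaps(notes):
    """Stub: a real agent would ask the model what the notes leave unanswered."""
    if any("open question 1" in n for n in notes):
        return []  # the gap has been covered, stop following up
    return ["open question 1"]

def deep_research(topic, max_rounds=3):
    notes, queue = [], [topic]
    for _ in range(max_rounds):          # bounded research budget
        if not queue:
            break
        question = queue.pop(0)
        notes.extend(search(question))   # read what the search returned
        queue.extend(find_gaps(notes))   # queue follow-ups on unanswered points
    return notes                         # a real agent would now write the report

report_notes = deep_research("topic overview")
```

The loop is what distinguishes deep research from a single search: each round's findings generate the next round's questions, which is why these modes take minutes rather than seconds.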
When Deep Research Makes Sense
Deep research isn't always the right choice—it takes minutes instead of seconds. But for certain tasks, the investment pays off:
- Preparing presentations: Grand rounds, case conferences, educational talks
- Complex clinical questions: When you need multiple perspectives and comprehensive coverage
- Staying current on a topic: "What's changed in heart failure management in the past year?"
- Understanding controversies: When expert opinion is divided and you need to understand why
- Writing or research: Review articles, grant backgrounds, policy documents
How the Major Players Compare
| Feature | ChatGPT Deep Research | Claude Web Search | Gemini Deep Research | Perplexity Deep Research |
|---|---|---|---|---|
| Research time | 2-5 minutes | 1-3 minutes | 10-15 minutes | 2-4 minutes |
| Sources searched | ~25-50 | ~250-400 | ~60-100 | ~50-100 |
| Output length | Long (20-40 pages) | Moderate (5-10 pages) | Very long (40-50 pages) | Moderate (5-15 pages) |
| Strength | Detailed, specific recommendations | Synthesized insights, quality over quantity | Comprehensive, integrates Google ecosystem | Fast, well-cited |
| Weakness | Limited access (30/month for Plus) | Shorter reports | Verbose, slow | Can feel less thorough |
| Access | Plus/Pro subscription | Pro subscription | Advanced subscription | Pro subscription |
Enabling Search in the Big Three
If you're using ChatGPT, Claude, or Gemini, here's how to make them search the web:
ChatGPT
Web search is automatic for many queries. For deep research, look for the "Deep Research" option when starting a new chat, or explicitly ask: "Search the web and give me a comprehensive report on X." Requires Plus ($20/mo) for full access; free users get limited searches.
Claude
Claude can search the web when you ask it to. Say "Search for recent research on X" or enable web search in your conversation. The feature synthesizes across many sources and cites them. Requires Pro subscription ($20/mo).
Gemini
Gemini searches Google automatically for relevant queries. For Deep Research, select the feature from the model dropdown (requires Advanced subscription at $20/mo). Deep Research can also access your Gmail, Drive, and Docs for personalized research.
With Search vs. Without Search
Understanding when models are searching matters for interpreting their answers. Unless a search feature is enabled or automatically triggered, ChatGPT, Claude, and Gemini answer from their training data—they are not consulting the live web.
| Without Web Search | With Web Search |
|---|---|
| Limited to training data (knowledge cutoff) | Access to current information |
| Can't cite specific sources | Provides citations to verify |
| May be outdated on recent topics | Finds recent publications and news |
| Faster responses | Takes longer but more comprehensive |
| Good for reasoning, writing, coding | Essential for current events, research |
For clinical questions where currency matters—new guidelines, recent studies, drug interactions—always ensure search is enabled or use a dedicated search tool. A question like "What's the current recommendation for aspirin in primary prevention?" will get different answers depending on whether the model is searching or relying on potentially outdated training data.
When you ask ChatGPT or Claude a clinical question, check whether they're citing sources. If there are no citations, the answer is coming from training data, not current search. For anything where guidelines or evidence have changed recently, that distinction matters.
Part 4: Choosing the Right Tool
A Decision Framework
Use Google When...
- You need local information (pharmacy, restaurant, directions)
- You want to find a specific website
- You need real-time data (flights, weather, scores)
- You want to buy something
- You need images or videos
Use Perplexity When...
- You want a synthesized answer, not links
- You need citations for each claim
- You're doing quick literature exploration
- You want to ask follow-up questions
- You're researching a clinical question
Use Deep Research When...
- You need comprehensive coverage of a topic
- You're preparing a presentation or report
- You want multiple perspectives synthesized
- You have 5-15 minutes for the AI to work
- The topic is complex and multi-faceted
Use PubMed/Specialized Databases When...
- You need systematic, reproducible searches
- You're writing for peer review
- You need specific study types (RCTs, meta-analyses)
- Comprehensiveness is critical
- You need full-text access through your institution
Clinical Scenarios
Here's how this plays out in practice:
Scenario: A patient asks about a supplement they saw promoted on social media.
Best tool: Perplexity Quick Search
Why: Fast synthesis with citations you can share. Ask: "What is the evidence for [supplement] for [claimed benefit]? Include any safety concerns or drug interactions."
Scenario: Preparing a grand rounds talk on newer diabetes medications.
Best tool: ChatGPT or Gemini Deep Research
Why: Comprehensive coverage, multiple sources, structured output. Request: "Create a comprehensive overview of GLP-1 agonists and SGLT2 inhibitors, including mechanism, major trials, current guidelines, and emerging applications."
Scenario: You need to find a pharmacy that's open right now.
Best tool: Google
Why: Local data, maps integration, real-time hours. No AI synthesis needed—you need current, local, actionable information.
Scenario: Conducting a formal systematic review.
Best tool: PubMed + traditional databases
Why: Reproducible, comprehensive, meets publication standards. AI search can help identify keywords and scope, but formal systematic reviews require documented, reproducible search strategies.
Scenario: A point-of-care question about managing an established condition.
Best tool: Perplexity Pro Search or clinical decision support (CDS) tools
Why: Authoritative summary with citations in under a minute. For established conditions, UpToDate or DynaMed may still be faster and more reliable.
Scenario: A patient brings in a printout of health claims from the internet.
Best tool: Perplexity (Academic focus) to verify
Why: Quickly verify claims with current literature. Ask: "What does current evidence say about [specific claim from patient's printout]?"
Scenario: A question about a newly approved drug.
Best tool: Start with Perplexity, verify with official prescribing information
Why: Perplexity can give you a quick orientation, but always verify dosing, contraindications, and interactions against official FDA labeling.
Part 5: Verification—The Non-Negotiable Step
No matter which tool you use, verification remains essential. AI search tools make finding information faster, but they don't make it automatically accurate. The speed and convenience can actually make verification more important, not less—because the barrier to acting on unverified information has never been lower.
Why Verification Still Matters
A 2025 Frontiers in Digital Health study compared ChatGPT, Gemini, Copilot, Claude, and Perplexity on clinical practice guideline questions for lumbosacral radicular pain. The finding: no AI chatbot provided advice in absolute agreement with CPGs. All had errors, omissions, or outdated information.
This finding is consistent across studies. AI tools can synthesize information effectively, but they lack the clinical context to know when guidelines don't apply, when patient-specific factors change the calculus, or when the latest evidence hasn't yet been incorporated into their sources.
This doesn't mean the tools aren't useful—they dramatically accelerate finding relevant information. But they're research accelerants, not replacements for clinical judgment. Use them to find information faster, then apply your expertise to evaluate and contextualize that information.
Verification Checklist
- Check the citations: Click through to source documents. Are they saying what the AI claims? AI synthesis can subtly distort meaning, especially for nuanced recommendations.
- Check the dates: Is this based on current guidelines or outdated sources? A 2019 source might predate important trials that changed practice.
- Cross-reference: Does UpToDate, DynaMed, or another authoritative source agree? If there's disagreement, investigate why.
- Consider the source quality: Peer-reviewed journal vs. institutional website vs. blog post vs. forum? AI search tools don't always distinguish quality reliably.
- Trust your training: Does this align with what you know? If not, investigate further. Sometimes the AI is right and your knowledge is outdated—but sometimes the AI is wrong.
- Consider the patient context: Even accurate general information may not apply to your specific patient's situation, comorbidities, or preferences.
If you're going to share AI-generated information with patients or use it for clinical decisions, always verify against authoritative sources first. The 13-26% error rates in current studies are too high for unverified clinical use. Patients trust that you've vetted the information you share—make sure that trust is warranted.
Building a Verification Habit
The challenge isn't knowing that verification matters—it's actually doing it consistently when the AI-generated answer seems reasonable and you're pressed for time. Some strategies:
- Make it automatic: Before acting on any clinical information from AI, open UpToDate or your preferred authoritative source in another tab
- Verify the key claims: You don't need to verify every sentence—focus on the specific recommendations that would change your management
- Be especially careful with: Dosing, contraindications, drug interactions, and anything that differs from your training or usual practice
- Note the limitations: When sharing information with patients, acknowledge that you're providing general information and their specific situation may differ
Learning Objectives
- Distinguish between traditional search, answer engines, and deep research tools
- Use Perplexity effectively for clinical research questions
- Enable and use web search in ChatGPT, Claude, and Gemini
- Choose the appropriate search tool for different clinical scenarios
- Verify AI-generated search results before clinical use