The Search

Medical information used to be hard to find. Then easy to find. Now it finds you. At each step we celebrated the progress. We are still figuring out what we lost.

Dr. Yoram Friedman

The library at the Hebrew University Faculty of Medicine was an impressive building.

I spent a lot of time there as a medical student in Jerusalem in the early 1990s. The first floor was given over almost entirely to Index Medicus, the monthly printed bibliography published by the National Library of Medicine. Thousands of volumes, in different shades of brown and orange, organized by subject heading and author across more than five thousand journals worldwide. It was not the articles themselves. It was a map to where the articles lived.

Preparing for journal club meant walking into that room with a topic, finding the right subject heading, writing down citations by hand, and then moving to the microfilm reader in the corner to retrieve the articles themselves. Some articles were in German. Some were in French. I once spent twenty minutes finding a classmate who could help me read an abstract in a language I did not have.

The friction was the entire experience of medical information access. You did not browse. You searched with a purpose, retrieved with effort, and read what you could reach.

I thought about that library recently when I opened Amazon and found a health AI on the homepage. I had not searched for it. It was simply there, in the same place where I might reorder coffee or check a delivery.

Thirty years. One question the whole way through: who gets to know what, and what does it cost them to find out?


The index

For more than a century, Index Medicus shaped how physicians related to evidence in ways that were invisible because they were universal. You did not encounter every paper on a topic. You encountered the papers you could reach, in the journals your library subscribed to, in the languages you could read, through the microfilm spools that happened to be intact.

What the friction produced, without anyone designing it this way, was curation through work. You did not casually consult the literature. You consulted it when the question was worth the effort. The effort itself was a filter, and an unspoken triage about what was worth knowing.


The index goes digital

MEDLINE, the digital version of Index Medicus, launched in the early 1970s. For its first two decades, searching it required a trained medical librarian and access to specialized terminals. The content was digital. The access was still rationed.

PubMed changed that in 1997. Free, public, searchable from any internet connection. The discovery problem largely dissolved for any physician with internet access. The two-step process (find the reference, then retrieve the article) collapsed into one step for open-access content and remained two steps for subscription journals.

What changed: the physician could now browse. Serendipity became possible. Following a citation into adjacent literature, discovering a paper you were not looking for, wandering through a field rather than conducting a targeted retrieval. The shape of how physicians engaged with evidence shifted when the cost of exploration dropped.

What did not change: the interpretation barrier. Reading the abstract was not the same as understanding the study. Evaluating a randomized controlled trial, assessing a systematic review, knowing which findings applied to the patient in front of you: these still required clinical training. Access had been democratized. Comprehension had not.


The information moves to the pocket

The iPhone arrived in 2007; within two years there was an ecosystem of tools built for the physician who needed an answer in the three minutes between patients. I tried almost every app that promised to put clinical information at the point of care: drug dosing calculators, lab reference ranges, differential diagnosis tools.

Medscape offered free, ad-supported drug and disease information; Epocrates and similar tools brought interaction checks and formulary data to the prescribing moment; Doximity created a credentialed network for sharing literature and cases among verified physicians.

UpToDate was the gold standard. Subscription-based, written by clinical specialists, updated continuously, organized to answer the specific question a physician had at the point of care. The tool that replaced "ask the attending" for well-defined clinical questions.

What the professional information layer achieved: it closed most of the gap between having a question and getting a synthesized, evidence-based answer. What it did not close: the gap between having the answer and applying it correctly to the specific patient in front of you. That gap, from evidence to clinical judgment for this person, remained the physician's job. The tools were aids. The responsibility was not transferable.


The consumer discovers they can search too

While physicians were building this professional information layer, patients were discovering the same internet.

Google made health information findable for the first time. Before it, health content online was fragmented and unreliable. After it, any symptom cluster could return a list of results that mixed accurate clinical information, outdated studies, health misinformation, and patient forums in a single page. The information was accessible. The consumer had no tool to distinguish between what came from a peer-reviewed study and what came from a message board.

WebMD translated professional health content for a non-clinical audience: useful for general orientation, risky for specific self-diagnosis. Patient forums and condition-specific communities were often more nuanced, written by people living with the condition, but vulnerable to anecdote displacing evidence.

The physician response to all of this was largely: "Don't Google your symptoms." A genuine reflex, born from watching patients arrive with printouts of worst-case scenarios for their mildly sore throat. It was also a failure to engage with what was actually happening. The patient was doing what made sense given their access to information and their anxiety about their health. Telling them to stop was easier than building something better.

Nobody built something better for a long time.


The interpretation barrier moves inside the model

ChatGPT arrived in late 2022 and changed the structure of the problem.

Google returned a list of results and required the user to read, evaluate, and synthesize. ChatGPT returned a synthesized answer in natural language. The interpretation barrier, the gap between evidence and conclusion, moved inside the model. The consumer no longer had to evaluate sources. The model evaluated them and presented a conclusion.

For patients, this was the first time a consumer tool could take a symptom description and return something that resembled a clinical assessment. Fluent, organized, confident. Without citations. Without awareness of the individual patient's medications, history, or context. Without any calibration for the difference between "this is well-established" and "this is a plausible inference from insufficient data."

The clinical risk is not that the model answers badly in obvious ways. It is that it answers with equal fluency regardless of certainty. UpToDate is written by specialists, reviewed continuously, and cites its sources. A general-purpose LLM is trained on a corpus that includes medical literature, health forums, Wikipedia, and much else besides, and cannot tell you which parts of its answer came from which kind of source.

The access problem, the problem the brown and orange volumes represented, was solved. A new problem had replaced it.


Both lanes upgrade at the same time

The professional information layer did not stand still while consumer AI advanced.

UpToDate integrated AI to enable natural language queries of its curated, specialist-reviewed evidence base. OpenEvidence built an AI-powered retrieval system specifically for clinicians, grounded in peer-reviewed literature with citations. The professional lane, upgraded: synthesized, evidence-based answers, now accessible through conversational queries.

Consumer tools like symptom checkers added AI layers to bring more structure to patient-facing interfaces. Better than raw general-purpose LLMs. Still without the curated evidence base that the professional tools could claim.

From the outside, both lanes now look similar: conversational interfaces that return fluent, synthesized answers. The underlying difference in evidence quality and clinical rigor is harder for the consumer to perceive, not easier. The outputs converged. The foundations did not.


It was just there

On a morning when I had a splitting headache, I opened Amazon.

I was not looking for health information. It appeared on the homepage. No download, no subscription, no separate application. Amazon Health AI was simply there, in the same surface where I might check a delivery or reorder something from my history. I asked about my headache. A long, thorough, fluent answer arrived.

My first reaction, as a product manager, was a design observation: when you have a splitting headache, the last thing you want is a wall of well-structured text. The information was medically appropriate. The format was not. A system designed to answer health questions should calibrate its output to the state of the person asking.

My second reaction was to look at what the AI was actually the entry point to.

Amazon Health AI is not a healthcare product. It is a healthcare funnel. Ask a health question; receive a synthesized answer and an offer for five free One Medical visits. Stay engaged; receive a membership offer. Fill a prescription; Amazon Pharmacy is already there. What Amazon has done that no prior consumer health product managed is make the first point of health contact frictionless enough that healthy people encounter it before they decide they are patients. Every prior system waited for the patient to be worried enough to seek care. This one is present before the seeking begins.

Amazon Health AI also uses real-time auditor agents running alongside the primary conversational agent, a secondary layer that monitors outputs for safety and escalation triggers, with human providers available when the threshold is met. This is architecturally more serious than a general-purpose LLM with a health disclaimer.

The trust question is real and worth sitting with. Amazon has thirty years of trust built on convenience, price, and delivery reliability. That trust is earned and genuine in its domain. Whether it transfers to clinical guidance is a different question, and the millions of people who will use this tool are not asking it. They are asking about their headache.


What the library was also doing

Back to the building with the brown and orange volumes.

The friction in that library was a barrier, and something else. PubMed removed the barrier. Google extended that removal to everyone. LLMs removed the interpretation step. Amazon removed the moment of seeking itself.

Each reduction in friction was celebrated as progress, and in many respects it was. More access, more equity, more people able to find information that might help them. These are real gains.

What nobody anticipated was that the friction was also doing something the new systems do not do. It was creating the space in which the user had to decide whether the question was worth asking. Index Medicus did not answer questions. It told you where to look, and the effort of looking was a built-in prompt to consider whether looking was necessary. UpToDate synthesized, but for a trained reader who arrived with a clinical question already formed. Even a general-purpose LLM required the patient to find it and open it.

Amazon Health AI appeared on the homepage before I decided I needed it. The friction is gone. The judgment the friction forced (is this information worth seeking? am I unwell enough to pursue this? what is the actual question I need answered?) is gone with it.

The patient now has access to more medical information, delivered more fluently, with more confidence, and with less accountability attached to it, than at any point in the history of medicine.

The library was a barrier. It was also a filter.

We removed the barrier. We have not yet decided what to do about the filter.
