What the Next Generation of Physicians Won't Remember
I do not have many vivid memories from school. High school left little impression. What I remember clearly, with the kind of detail that does not fade, starts the day I entered clinical training.
I remember my first pediatric patient and the look on the family's face. I remember my first adult patient. I remember the patients we saw in the emergency department, the ones we debated at grand rounds, the first patient I saved during CPR and the many patients I was not able to save. I remember the mistakes I made in my first year of residency, each one of them, and I remember how we salvaged every single one.
I remember because this is how you learn medicine. You observe. You try. You succeed and you fail. Those experiences burn themselves into memory in a way that a lecture, a textbook, or a simulation cannot replicate. The learning is inseparable from the weight of the encounter.
AI is very good at removing weight.
What Gets Built in the Difficulty
The apprenticeship model that trained many physicians I know was built on conditions that look inefficient from the outside. Long hours. Repetitive exposure to common cases. Graduated responsibility that moved slowly. Correction delivered in the moment, sometimes in front of a patient, sometimes in front of peers. It was not designed for comfort. It was designed, whether anyone said so explicitly or not, for the kind of learning that only happens under pressure.
Three things happened simultaneously in that model. You accumulated knowledge by carrying it yourself, with no shortcut to retrieve it. You built reasoning by performing it independently, repeatedly, until differential diagnosis stopped being a checklist and became something closer to instinct. And you made mistakes, small ones, caught before they caused harm, and those mistakes were the most durable lessons of all.
AI is now changing all three conditions at once.
When a clinical decision support system surfaces the likely diagnosis before the trainee has attempted one, the knowledge retrieval that builds memory never fires. When an algorithm generates the differential, the reasoning muscle that produces independent clinical judgment atrophies. And when AI reduces the rate of diagnostic error, which it does, and which is largely a good thing for patients, it also reduces the rate of the corrective experience that medicine has always used to teach.
You cannot learn from mistakes you are no longer making.
The Evidence We Have
The most direct evidence comes from colonoscopy. In 2025, researchers published an observational study of experienced endoscopists at four Polish centers participating in the ACCEPT trial. AI guidance improved adenoma detection rates while it was in use. But when those same endoscopists performed colonoscopies without AI after months of assisted practice, their detection rates dropped below their pre-AI baseline. From 28.4% to 22.4%. Sustained exposure to the algorithm appeared to erode the very detection skill it was built to augment.
This is one study, and an observational one. It should not be over-generalized. But it is the clearest data point we have for a phenomenon that is almost certainly not limited to endoscopy. It describes what happens when cognitive work is outsourced: the algorithm handles part of the pattern recognition, and the human's own recognition degrades from disuse. If months of AI assistance can do that to experienced physicians, consider what it does to trainees, in whom the capability was never fully built in the first place. When the AI leaves, there is nothing underneath.
The second pressure is less visible and more structural. As AI triage improves, as wearable monitoring catches early pathology before it escalates, as direct-to-consumer platforms manage lower-acuity cases outside the clinic, the patients arriving at teaching hospitals will increasingly be the complex, high-acuity end of the distribution. This sounds like richer training material until you understand that clinical reasoning is built on the common cases first. Internists learn to recognize sepsis by seeing pneumonia fifty times. The diversion of ordinary cases from clinical environments removes the conditions under which that foundation gets built. Trainees will see more complexity with less foundation to interpret it.
The End State Nobody Is Discussing
If these pressures compound over two decades, the physician workforce of 2045 will have been trained almost entirely in AI-augmented environments, with limited exposure to independent diagnostic reasoning, limited case volume without algorithmic assistance, and limited experience of the corrective failure loop that medicine has always depended on.
That workforce will likely perform well by most measurable clinical metrics. The AI will support them well. Outcomes may be better than today.
But when something goes wrong with the AI, they will have no floor.
The concept of "human in the loop" rests on an assumption that the human brings independent judgment capable of catching what the algorithm misses. If the human was trained inside the algorithm, that judgment was never built. The loop has a human in it. The human inside the loop may not possess the judgment the loop assumes.
The tech industry has been through its own version of this reckoning. AI is rewriting what software engineers do, what they need to know, and which skills are valued. A developer whose role is disrupted can retrain in three months. The equivalent disruption in medicine arrives to a workforce with a fifteen-year training pipeline. Many of the physicians practicing in 2040 are in training now. The curriculum decisions being made today will determine whether that workforce has genuine independent clinical capability or whether it is entirely dependent on systems that, as the ACCEPT trial suggests, may be quietly degrading the very judgment meant to supervise them.
The healthcare community is not moving at the speed this problem requires.
What Will Not Transfer
I was trained by physicians who learned medicine before digital imaging, before electronic records, before decision support of any kind. They carried everything in their heads, built from decades of direct patient contact.
I remember sitting in weekly radiology sessions with a professor who would describe the patient's condition as he moved through a scan. Not by measuring. By reading. Pattern recognition so deep it had become automatic.
But the image that stays with me most clearly is an internal medicine professor who would walk into a room and, within seconds of crossing the threshold, tell us the patient's approximate glucose level and whether their bilirubin was elevated. From the smell, as he described it to us. From something in the air that his nervous system had learned to decode over thirty years of seeing patients no one else could figure out.
That knowledge is not in any textbook. It cannot be uploaded. It was built through thousands of physical encounters with patients, corrected by senior physicians who had built the same capability the same way, and transmitted by putting trainees in the room and asking them to pay attention.
I am not certain that form of knowledge will exist in the generation being trained now. Not because they are less capable or less committed. Because the conditions for building it are disappearing.
The cases that would have developed it are being resolved before they reach the clinic. The mistakes that would have sharpened judgment are being caught by algorithms. The volume that would have built pattern recognition is being filtered toward the complex end, with the common cases gone. And the attending who carries that embodied knowledge is closer to retirement every year, with no clear mechanism for transmission.
Medicine has always learned by remembering.
I am not sure what replaces that.
This is the fourth article in a series on the changing architecture of patient care. Previous pieces: "Patients Are Not Waiting for Permission," "The Parallel Health System," and "Who Owns the Patient Relationship Now?"