Dr. Yoram Friedman

Dr. Friedman is a physician turned product manager with 20+ years building enterprise software and leading digital transformation. He writes about the intersection of technology, human behavior, and healthcare, where solutions directly impact lives.

The Blueprint Was Already There

This week I read three papers that made me happy. JAMA. NEJM. Nature Medicine. All randomized trials. All showing AI outperforming standard care. Then I read the methodology. None of them used LLMs. The AI winning in top journals in 2026 was built before the hype cycle. The blueprint was always there.

The 60-Point Gap: Why We're Measuring the Wrong Customer

A Nature study shows LLMs achieve 94.9% accuracy on benchmarks but only 34.5% when laypeople use them on physician-created scenarios. The gap reveals something deeper: We measure the model in isolation. We deploy it to a human in distress. The system fails at the intersection.

Governance: The Word Everyone Uses and Nobody Agrees On

Everyone talks about governance. Nobody agrees on what it means. Data governance, AI governance, master data governance: they're not separate programs. They're one spectrum. And most enterprises already have 70% of what they need. They just can't see how the pieces connect.

The Architect Who Should Have Read JAMA

In healthcare AI, policy shifts now appear first in journals like JAMA and NEJM, then quietly become grant conditions and RFP requirements. Many tech teams miss this signal. The winners will be those who translate medical literature into architecture before it becomes mandatory.

Do No Harm, Encoded

Asimov gave robots three non-negotiable laws. Medicine gives physicians an oath. Healthcare AI has governance, but no runtime constitution. Until safety principles are enforced at the moment of output, not just in policy documents, we are deploying systems without the equivalent of “do no harm.”

Trained on the Wrong End of the Story

A Nature Medicine study found ChatGPT Health under-triaged 52% of real emergencies. The deeper issue may not be the model, but its training data: AI learns from documented hospital records, yet it is deployed at first contact, where the most critical signals were never captured.

Stop Waiting for Clean Data

The healthcare data integration problem is 20 years old and not going away. So why are we still building AI that assumes clean data? A case for designing AI that works in the real world, not the one we keep promising to build.