The Healthcare AI Spectrum: What's New, What's Not, and What We Can Learn From Other Industries

“AI in healthcare” is too broad a term to be useful. It spans everything from mature imaging tools to experimental autonomous agents. Governance should match risk and maturity, not hype. Many use cases transfer directly from other industries, and clarity about autonomy and context will speed safe, meaningful adoption.

Why The Term "AI in Healthcare" Has Become Meaningless

Saying "AI in healthcare" today is like saying "software in healthcare" in the 1990s. Technically accurate, but so broad it obscures more than it clarifies. The term encompasses everything from decade-old imaging algorithms to experimental autonomous diagnostic agents, billing automation to gene therapy optimization, chatbots answering patient questions to systems recommending chemotherapy protocols.

This lack of clarity creates real problems. Healthcare professionals hear "AI" and imagine futuristic, unproven technology replacing clinical judgment. Technology experts assume all healthcare AI faces the same regulatory and safety barriers. Neither perspective is accurate, and both slow down adoption of tools that could genuinely improve care.

I was a medical student in the 1990s, paying my tuition by teaching researchers and senior physicians how to use computer software. I taught dBase, Lotus, and Quattro Pro for data management, and early versions of Office for note-taking and presentations. I watched the exact same pattern then that we're seeing with AI today: "We don't have time to type." "Computers will distract us from patients." "What if there's an error?" "What happens when the system goes down?" The resistance was real, the concerns were valid, and the debates were heated about whether computers belonged in the intimacy of the doctor's office at all.

I later spent the bulk of my professional life jumping between practicing medicine and building enterprise-scale software solutions. That experience taught me that we need better frameworks for discussing AI in healthcare. We need to distinguish what's genuinely new from what's been running quietly for years. We need to identify which challenges are uniquely healthcare problems versus which ones can learn from other industries. And we need to match governance intensity to actual risk rather than to hype.

The Maturity Spectrum: Most "Healthcare AI" Isn't New

The first step toward clarity is recognizing that much of what we call AI in healthcare has been operational for years, sometimes decades. These aren't experimental technologies requiring novel oversight frameworks. They're mature systems with established performance records, clear use cases, and proven value.

Mature and Already Normalized

Computer-aided detection in radiology has been FDA-cleared for decades, with early mammography CAD approvals in 1998. These systems have been reading mammograms, chest X-rays, and CT scans with substantial published evaluation. They don't make autonomous decisions; they highlight areas for human review and flag potential abnormalities.

Clinical research and epidemiology have relied on predictive models for risk assessment, outcome forecasting, and population health analytics since long before "AI" became a buzzword. These models power clinical decision support tools and identify high-risk patient cohorts. They're validated, monitored, and continuously refined.

Drug discovery and development increasingly use AI for target identification, molecule generation, and trial optimization. Pharmaceutical companies employ these tools to predict drug interactions, model protein structures, and design better molecules. This is increasingly standard practice, especially in clinical development and operational workflows.

Revenue cycle management and fraud detection systems have used automated pattern recognition for years to identify billing errors and prevent fraud, waste, and abuse. These systems help detect and reduce losses estimated in the tens of billions annually by identifying anomalies impossible for humans to spot manually.

Operational forecasting for staffing, bed management, and supply chain relies on predictive models refined over years. Hospitals use these to forecast patient volume, optimize schedules, and manage inventory. These applications are borrowed from other industries and adapted for healthcare.

Robotic Process Automation (RPA) handles repetitive tasks like data entry and system transfers, just as in finance and insurance. While not always labeled "AI," RPA increasingly incorporates intelligent automation for handling exceptions.

The key insight: these mature use cases aren't where the governance debate should focus. The technology works. The challenge is integration and scaling, not proving safety from scratch.

Emerging and Safety-Sensitive

The current AI debate centers on newer tools that sit closer to clinical judgment and patient interaction, leveraging large language models (LLMs) and generative AI.

Generative AI for clinical documentation represents a significant shift. Systems powered by LLMs can record encounters, generate summaries, draft treatment plans, or suggest diagnoses. When an AI drafts a clinical note, who's responsible for errors? How do we prevent drift from "suggesting" to "deciding"?

Patient-facing symptom guidance and triage tools use conversational AI to assess symptoms, recommend care pathways, and escalate to clinicians when appropriate. They can improve access and reduce unnecessary emergency visits, but operate where patient trust, health literacy, and clinical risk intersect.

Agentic systems take actions across multiple platforms: sending messages, placing orders, routing referrals, scheduling appointments. These aren't passive tools; they execute tasks with varying human oversight. The challenge is ensuring appropriate guardrails without eliminating efficiency gains.

Real-time deterioration detection systems use continuous monitoring to predict patient decline, trigger protocols, or alert response teams. They can save lives but also generate alert fatigue if not calibrated carefully. Balancing sensitivity and specificity becomes both clinical and operational.
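
A quick back-of-the-envelope calculation shows why this balance is so hard. The sketch below (illustrative numbers, not from any real deployment) applies Bayes' rule to show how a model that sounds accurate still produces mostly false alarms when deterioration is rare:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: the fraction of fired alerts that are true positives."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical deterioration model: 90% sensitive, 90% specific,
# but only 2% of monitored patients actually deteriorate.
ppv = positive_predictive_value(0.90, 0.90, 0.02)
print(f"{ppv:.1%} of alerts are real")  # 15.5% -- most alerts are false alarms
```

Roughly five of every six alerts are noise at that prevalence, which is exactly how alert fatigue takes hold even when the model's headline metrics look strong.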

The question isn't "can they work?" It's "what guardrails prevent them from silently drifting into authority they weren't designed to have?"

Experimental and Exploratory

Some applications remain genuinely experimental, deployed in limited research contexts.

Closed-loop therapeutic systems that automatically adjust insulin, sedation, or ventilator settings face significant regulatory and safety hurdles before broad deployment.

Fully autonomous diagnostic agents that make clinical decisions without human confirmation remain largely theoretical.

Predictive genomic medicine that recommends personalized therapies based on genetic profiles is advancing rapidly but faces challenges in validation, equity, and clinical integration.

These deserve scrutiny and careful governance, but they shouldn't dominate the broader conversation. They're the edge, not the center.

The Risk and Autonomy Ladder: Governance Should Match Reality

The second dimension that brings clarity to healthcare AI is understanding that governance burden correlates with where a system sits in the decision chain, not with whether it uses AI.

At the informational level, AI systems explain, summarize, translate, or educate. A chatbot answering questions about medication side effects or a tool translating medical jargon operates with minimal clinical risk. These provide information; humans make all decisions.

Workflow assistance tools draft content, extract information, or route work but don't make clinical decisions. An AI that pre-fills authorization forms, extracts diagnoses for coding, or routes patient messages reduces administrative burden. The risk is primarily operational; errors slow work but rarely harm patients directly.

Decision support systems recommend, prioritize, or generate risk scores that inform clinical judgment. A model calculating readmission risk, flagging patients likely to miss appointments, or suggesting clinical guidelines provides input to decisions but doesn't make them. The clinician retains full authority to accept, modify, or reject recommendations. This is traditional clinical decision support with established governance.

Time-critical decision support operates under pressure where delays can harm patients. Deterioration detection triggering rapid response, algorithms identifying stroke to accelerate treatment, or tools flagging critical lab values sit in a higher-risk category. Urgency constrains human review time, increasing the importance of accuracy and appropriate alert thresholds.

Autonomous action systems execute changes without human confirmation. These are rare today and face the highest regulatory scrutiny. A closed-loop system adjusting medication dosing or an autonomous scheduler canceling appointments without review would fall here.

The further down this ladder, the more rigorous the requirements for clinical validation, ongoing monitoring, clear ownership, and well-defined escalation paths. But notice: this framework applies regardless of whether the underlying technology is "AI" or traditional software. The risk derives from the role in clinical workflow, not from the algorithmic approach.
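
One way to make the ladder concrete is to write it down as an ordering with governance requirements that accumulate as autonomy increases. The level names and requirement labels below are my own illustrative shorthand, not any standard taxonomy:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative rungs of the risk-and-autonomy ladder."""
    INFORMATIONAL = 1      # explains, summarizes, educates
    WORKFLOW_ASSIST = 2    # drafts, extracts, routes work
    DECISION_SUPPORT = 3   # recommends; the clinician decides
    TIME_CRITICAL = 4      # urgency limits time for human review
    AUTONOMOUS_ACTION = 5  # executes changes without confirmation

def governance_requirements(level: AutonomyLevel) -> list[str]:
    """Requirements accumulate as a system moves down the ladder."""
    reqs = ["privacy controls", "audit logging"]
    if level >= AutonomyLevel.WORKFLOW_ASSIST:
        reqs.append("operational monitoring")
    if level >= AutonomyLevel.DECISION_SUPPORT:
        reqs.append("clinical validation")
    if level >= AutonomyLevel.TIME_CRITICAL:
        reqs.append("calibrated alert thresholds")
    if level >= AutonomyLevel.AUTONOMOUS_ACTION:
        reqs += ["well-defined escalation paths", "regulatory clearance"]
    return reqs

print(governance_requirements(AutonomyLevel.DECISION_SUPPORT))
```

Note that nothing in this sketch asks whether the system is "AI": the same ordering applies to a rules engine or a regression model sitting at the same rung.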

What's Healthcare-Specific and What's Transferable

One of the most important and least discussed insights about healthcare AI is that many challenges aren't uniquely healthcare problems. Other industries have faced similar issues and developed mature best practices we can adapt rather than reinvent.

Healthcare-Unique Challenges

Some aspects genuinely require specialized approaches. Clinical safety and the irreversibility of certain medical decisions create constraints that don't exist in most other domains. A wrong product recommendation in e-commerce costs a returned item; a missed diagnosis can cost a life. The stakes demand different risk tolerance and validation rigor.

Semantic interoperability in healthcare is uniquely complex. Clinical terminology systems like SNOMED CT, LOINC, RxNorm, and ICD-10 encode meaning in ways other industries don't require. Understanding that "MS" might mean Multiple Sclerosis, Mitral Stenosis, or Mental Status depending on context requires healthcare-specific knowledge.

Consent, privacy, and secondary use of data face healthcare-specific regulations through HIPAA and similar frameworks globally. Rules around de-identification, data minimization, audit trails, and permitted uses are more stringent than in most industries.

Workflow reality in healthcare, including shift work, frequent interruptions, complex handoffs, and edge cases that don't fit standard protocols, creates integration challenges requiring deep clinical understanding. An AI tool that works perfectly in a quiet office might fail completely in a busy emergency department at 3am.

Equity and representation gaps matter more in healthcare because the consequences of algorithmic bias aren't just unfair outcomes; they're health disparities that compound existing inequities. When training data underrepresents certain populations, the resulting AI performs worse for those groups in ways that directly harm health.

Transferable Best Practices from Other Industries

Despite these unique challenges, many aspects of healthcare AI can and should learn from mature practices in other sectors.

From Finance: Model Risk Management

The financial services industry developed sophisticated frameworks after the 2008 crisis. These include tiering models by risk level, requiring independent validation for high-risk models, implementing ongoing monitoring for drift, and maintaining documented controls. Healthcare can adopt these frameworks almost directly, adding clinical validation but using the same structural approach.

Financial institutions distinguish between models that inform versus models that make decisions, with different validation requirements for each. They maintain model inventories, require annual reviews, and have clear escalation when models underperform. Healthcare AI governance can build on these proven patterns.
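
As a rough sketch of what a finance-style model inventory might look like when adapted to healthcare (the fields, tiers, and review intervals below are illustrative assumptions, not any regulator's requirements):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelRecord:
    """One entry in a model inventory; fields are illustrative."""
    name: str
    risk_tier: int            # 1 = informs only, 3 = drives decisions
    last_validated: date
    independent_validation: bool

# Higher-tier models get shorter review cycles, mirroring finance-style tiering.
REVIEW_INTERVAL = {
    1: timedelta(days=730),
    2: timedelta(days=365),
    3: timedelta(days=180),
}

def overdue_for_review(model: ModelRecord, today: date) -> bool:
    """Flag models whose validation has lapsed past their tier's interval."""
    return today - model.last_validated > REVIEW_INTERVAL[model.risk_tier]

readmission = ModelRecord("readmission-risk", risk_tier=2,
                          last_validated=date(2024, 1, 15),
                          independent_validation=True)
print(overdue_for_review(readmission, date(2025, 6, 1)))  # True: past annual review
```

The structural point is the one finance learned the hard way: an inventory with explicit tiers and clocks turns "we should revalidate models" from an aspiration into a queryable obligation.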

From Aviation: Safety Culture and Human Factors

Aviation's approach to safety, including checklist methodologies, human factors engineering, designing systems with no single point of failure, and strong escalation protocols, translates well to healthcare AI. Aviation learned that complex systems require layered defenses where no single failure causes catastrophic outcomes.

"Human in the loop" for high-risk decisions, standard procedures when automation fails, and rigorous incident investigation focusing on system improvement rather than individual blame all have direct applications. When implementing AI clinically, adopting aviation-style safety thinking provides a tested framework.

From Technology: Site Reliability Engineering

The tech industry's Site Reliability Engineering practices, including Service Level Objectives, error budgets, incident management, postmortems, rollback procedures, and on-call rotations, provide operational frameworks for managing AI at scale. Healthcare AI can adopt SLOs for model accuracy, response time, and availability, treating them as engineering commitments.

Blameless postmortems after incidents, where teams focus on which system failures allowed a problem rather than on who made a mistake, help build learning organizations. Rollback procedures that allow quick reversion when new deployments underperform provide safety nets healthcare often lacks.
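
The error-budget idea behind SLOs takes only a few lines of arithmetic to illustrate. The target and the "documentation drafts passing quality checks" framing below are hypothetical, chosen just to show the mechanic:

```python
def error_budget_remaining(slo_target: float, successes: int, total: int) -> float:
    """Fraction of the error budget still unspent for a given SLO target."""
    allowed_failures = (1 - slo_target) * total
    actual_failures = total - successes
    return 1 - actual_failures / allowed_failures if allowed_failures else 0.0

# Hypothetical SLO: 99% of AI-drafted notes pass automated quality checks.
remaining = error_budget_remaining(slo_target=0.99, successes=9940, total=10_000)
print(f"{remaining:.0%} of error budget left")  # 40%: 60 of 100 allowed failures spent
# A negative value means the budget is exhausted -- in SRE practice, the
# signal to pause new rollouts and consider rolling back.
```

The design choice worth borrowing is that the budget makes the trade-off explicit: teams ship changes freely while budget remains and slow down when it runs out, rather than arguing about risk in the abstract.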

From Logistics: Observability and Chain of Custody

Logistics industries excel at end-to-end observability, tracking items through complex supply chains, managing latency and freshness constraints, and maintaining chain of custody. These practices translate directly to healthcare data and AI systems.

Knowing exactly where data came from, how it was transformed, when it was last updated, and who accessed it requires logistics-style tracking. Managing data freshness, ensuring predictions use appropriately recent information, mirrors logistics concerns about perishable goods. Healthcare can adopt these proven approaches.
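
A minimal sketch of such a freshness gate, with illustrative field names and cutoffs (a real system would carry much richer lineage metadata):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataPoint:
    """A prediction input with chain-of-custody metadata; fields illustrative."""
    value: float
    source_system: str
    last_updated: datetime

def usable_for_prediction(dp: DataPoint, now: datetime,
                          max_age: timedelta) -> bool:
    """Logistics-style freshness gate: refuse stale inputs rather than
    silently predicting from out-of-date data."""
    return now - dp.last_updated <= max_age

lab = DataPoint(4.2, "lis", datetime(2025, 6, 1, 8, 0, tzinfo=timezone.utc))
now = datetime(2025, 6, 1, 14, 0, tzinfo=timezone.utc)
print(usable_for_prediction(lab, now, max_age=timedelta(hours=4)))  # False: 6h old
```

The point of the explicit gate is the same as a cold-chain check in logistics: a stale input is rejected loudly at the boundary instead of quietly degrading the prediction downstream.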

From Hospitality: Service Design and Trust

The hospitality industry's focus on service design, reducing friction, managing customer expectations, and building trust through transparency provides lessons for patient-facing AI. Hotels and restaurants understand that service quality depends on meeting expectations, recovering gracefully from failures, and building trust through consistency.

Patient-facing AI tools can learn from hospitality practices around setting appropriate expectations, providing clear escalation paths when self-service fails, and designing interactions that feel supportive. The hospitality principle of "service recovery," that how you handle failures often matters more than avoiding every failure, applies directly to healthcare AI.

Commodity Use Cases: Direct Transfer with Healthcare Constraints

A significant portion of "healthcare AI" consists of use cases that are essentially commodities in other industries, requiring only healthcare-specific privacy and compliance layers.

Document automation and extraction, which involves pulling information from faxes, PDFs, and unstructured text, is standard practice in finance, insurance, and legal industries. Healthcare adds HIPAA compliance and clinical terminology understanding, but the core technology is mature and transferable.

Contact center optimization using AI for call routing, chatbots for common questions, and sentiment analysis for quality monitoring works the same in healthcare as in banking or retail. The difference is ensuring appropriate escalation to clinical staff and maintaining privacy controls.

Scheduling and routing optimization, which involves matching supply with demand, reducing wait times, and optimizing staff allocation, draws directly from logistics and hospitality. Healthcare adds clinical acuity considerations and regulatory constraints around access, but foundational algorithms are well-established.

Anomaly detection for billing integrity, claims outliers, and operational incidents uses the same statistical approaches proven in fraud detection across industries. Healthcare just needs to ensure clinical context appropriately informs what counts as an anomaly.
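
As a sketch of the statistical core, here is a median-based outlier test of the kind fraud-detection pipelines build on (chosen because a single large spike can't inflate it the way it inflates a mean-based z-score; the claim amounts are made up):

```python
from statistics import median

def mad_outliers(amounts: list[float], threshold: float = 3.5) -> list[float]:
    """Flag amounts whose modified z-score, based on the median absolute
    deviation (MAD), exceeds `threshold`."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # degenerate case: most values identical
        return [a for a in amounts if a != med]
    # 0.6745 scales MAD to be comparable to a standard deviation
    # under a normal distribution.
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# Hypothetical daily claim amounts with one anomalous billing spike.
claims = [120.0, 135.0, 110.0, 128.0, 142.0, 118.0, 2450.0, 131.0]
print(mad_outliers(claims))  # [2450.0]
```

In production these checks layer in the clinical context the paragraph above mentions, so that a genuinely expensive but appropriate procedure isn't flagged alongside a billing error.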

Platform reliability monitoring, including tracking system uptime, performance, error rates, and user experience, is identical across industries. Healthcare systems need the same observability, alerting, and incident response as any mission-critical software.

Forecasting for demand, staffing, and inventory management uses predictive approaches standard in retail, manufacturing, and logistics. Healthcare adds complexity around clinical urgency and regulatory requirements but doesn't require fundamentally different approaches.

The key insight: for these commodity use cases, healthcare doesn't need to build entirely new AI governance frameworks. We can adopt established controls from other industries, add healthcare-specific privacy and audit requirements, and move forward. The governance challenge isn't novel; it's adaptation.

A Framework for Moving Forward

Understanding healthcare AI through these two dimensions, maturity and risk level, enables more productive conversations than treating all AI as uniformly novel and risky.

For mature, normalized AI in imaging, research, drug development, and operations, the focus should be on scaling, integration, and continuous improvement rather than proving concepts or debating whether AI should be used at all.

For emerging, safety-sensitive AI in documentation, triage, and agentic workflows, the conversation should center on appropriate guardrails, clear accountability, monitoring for drift, and preventing silent expansion of authority beyond designed boundaries.

For experimental AI in closed-loop systems and autonomous diagnosis, rigorous research protocols, limited deployment with extensive oversight, and careful evaluation before broader rollout remain appropriate.

Across all categories, governance intensity should match actual risk level based on the decision chain, not on hype or fear. Informational tools require different oversight than autonomous action systems, regardless of the underlying technology.

And critically, healthcare should look beyond its own borders. Finance, aviation, technology, logistics, and hospitality have solved problems that look remarkably similar to challenges healthcare faces with AI. We should import proven best practices, adapt them for healthcare-specific constraints, and avoid reinventing governance frameworks that already exist and work.

The Path Forward

The healthcare industry doesn't need a single "AI governance framework." It needs nuanced approaches that match governance to actual deployment patterns and risk levels. It needs to stop treating decade-old imaging algorithms the same as experimental autonomous agents. It needs to recognize that many AI use cases are essentially shared with other industries and can adopt proven practices with healthcare-specific adaptations.

Most importantly, we need to move beyond the unhelpful binary of "AI is either revolutionary or dangerous" toward a more mature understanding: AI is a set of tools, some old and some new, some high-risk and some low-risk, some uniquely healthcare-focused and some shared across industries. The governance question isn't "should we use AI in healthcare?" It's "what governance matches each specific use case based on maturity, risk, and whether we can learn from other industries?"

When we frame the conversation this way, we can accelerate adoption of genuinely helpful tools, apply appropriate scrutiny to high-risk applications, and stop debating whether to use AI at all. The question was never whether electricity would transform healthcare. It was how to deploy it safely and effectively. The same applies to AI.

The organizations that understand this nuance, that can distinguish mature from emerging, high-risk from low-risk, healthcare-specific from transferable, will move faster, deploy more safely, and deliver more value than those treating all "AI in healthcare" as a single, undifferentiated category.

The spectrum is wide. The sooner we navigate it with clarity, the better.
