Leon Rozenblit and Steve Labkoff

At the Inflection Point: What Practical AI in Healthcare Is Teaching Us Right Now

Healthcare is once again at a moment of profound transition. Every few decades, a force emerges that reshapes how care is delivered, measured, and experienced. In the 1990s, it was the internet and the rise of electronic health records. Today, it is artificial intelligence. And as with past transformations, the promise is enormous—along with the risk of repeating old mistakes.

Across recent conversations on Practical AI in Healthcare, one idea surfaced again and again: this is not incremental change. AI represents a true inflection point.

AI as a Structural Shift, Not a Feature Upgrade

Capabilities that were once aspirational—real-time clinical decision support, large-scale data synthesis, and simulation of complex biological systems—are now operational or rapidly approaching maturity. Tasks that once required months of work can now be completed in seconds. This shift is already reshaping clinical care, research, life sciences, and patient engagement.

But speed alone does not guarantee progress. History shows that when transformative technologies are introduced without sufficient reflection, they can distort workflows and priorities rather than improve them.

Are We Repeating the EHR Era’s Mistakes?

Several discussions raised an uncomfortable but necessary question: are we about to repeat the errors made during the rapid rollout of health IT and EHRs?

Those systems delivered undeniable wins—genomics, imaging, population-scale analytics—but they also imposed significant burdens on clinicians. Documentation increasingly served billing, compliance, and measurement rather than patient care. The fear is not that AI will fail technically, but that it will be layered on top of broken processes, accelerating dysfunction instead of alleviating it.

You Can’t Automate on a Shaky Foundation

One of the most consistent themes was the importance of data governance and trust. Automation cannot succeed without reliable data, clear ownership, and transparent stewardship. Yet many health systems still struggle to answer basic questions: What data do we have? How trustworthy is it? Who is accountable for its use?

Without strong governance, AI models risk amplifying bias, reflecting localized practice patterns, or producing outputs that appear authoritative but are fundamentally flawed. Trust—between clinicians, patients, institutions, and technology—must be built intentionally, not retrofitted after deployment.

Bright Spots in Clinical Trials and Research

While caution dominated some areas, optimism was especially strong in the clinical trials domain. AI-assisted protocol design, simulation, and real-time analytics offer the potential to address long-standing inefficiencies that derail studies—not because drugs fail, but because processes do.

The ability to simulate trial designs before enrolling patients could reduce wasted effort, improve endpoint selection, and accelerate development timelines. While skepticism remains warranted—models are only as good as their assumptions—the early results suggest real value rather than hype.

Patients Are Already Using AI—Ready or Not

From the patient perspective, one reality is unavoidable: patients are already using AI tools to interpret test results, research their conditions, and guide their decisions. There is no effective mechanism to stop this.

That reality shifts responsibility toward creating better guidance, clearer signals of trustworthiness, and higher-quality information sources. Tools that ignore patient agency—or worse, obscure their own limitations—risk harm rather than empowerment.

Rethinking AI Literacy and Human Judgment

The conversations also challenged simplistic notions of “AI literacy.” This is not about technical fluency or learning how models work under the hood. Instead, literacy means judgment: knowing when to trust an output, when to question it, and when human expertise must intervene.

This echoes earlier moments in information history—from the printing press to the internet—when societies struggled to distinguish signal from noise. Each time, the challenge was not access to information, but interpretation.

Technology Won’t Fix Broken Incentives

Perhaps the most sobering insight is that AI will accelerate whatever structures already exist. If incentives, workflows, and power dynamics are misaligned, AI may deepen the problem rather than solve it. Technology is not neutral—it amplifies.

And yet, there is room for cautious optimism. If AI can create efficiency and slack—relieving administrative burden rather than compounding it—that space could be used to redesign care more thoughtfully.

Staying in the Conversation

The work ahead is not about chasing novelty or declaring victory too soon. It is about staying engaged: asking sharper questions about governance, workflows, responsibility, and trust.

This inflection point is real. Whether it bends toward better care or faster dysfunction will depend less on algorithms—and more on the choices made around them.