Reflections from the Frontlines of Practical AI in Healthcare

After two months of conversations with some of the most innovative minds in healthcare AI, Practical AI in Healthcare co-hosts Dr. Steven Labkoff and Dr. Leon Rozenblit took a step back in Episode 8 to reflect on what they’ve learned—and what’s still taking shape. Their discussion covered the evolving meaning of “practical” AI, the tension between innovation and regulation, and the human factors shaping this technological revolution.

From startup founders to senior pharma and IT executives, the podcast’s early guests have revealed both the extraordinary promise and persistent uncertainty of this moment. As Steve noted, “We are still very early—closer to the Wright Brothers than the Space Shuttle.” Like aviation’s first century, the journey of AI in healthcare will be full of unexpected turns, false starts, and remarkable breakthroughs. What matters now, both hosts agree, is learning how to steer through the noise toward the real signal.

The Role of Governance, Not Just Regulation

One key insight was the need to distinguish between _regulation_—formal rules imposed by government—and _governance_, the broader system of oversight, responsibility, and trust that must evolve as AI becomes embedded in care and research. “Too much regulation too soon can stifle innovation,” Steve reflected, “but a lack of governance undermines trust.” Leon agreed, suggesting that the goal of governance isn’t to increase success but to “reduce the severity of failure”—a lesson drawn from innovation history itself.

AI Literacy: Beyond Prompt Engineering

Another recurring theme was the urgent need for AI literacy across all healthcare stakeholders—clinicians, patients, and researchers alike. As Steve emphasized, literacy is not about mastering prompt engineering; it’s about understanding when and how to trust AI output and how to think critically about what it delivers. For physicians, that means integrating AI as a clinical collaborator while maintaining professional judgment. For patients, it means learning how to use AI as an empowerment tool without being overwhelmed by its responses.

Data Stewardship and the Trust Problem

Quality data remains the lifeblood of safe, effective AI. Poorly curated datasets can encode bias, distort models, or produce misleading results. Steve underscored that “good data stewardship isn’t optional—it’s foundational.” Without transparency about how data is gathered, used, and validated, trust in AI will crumble. Leon added that the goal isn’t perfection but appropriate engineering—“we can often deal with noise, but we can’t deal with opacity.”

Practical Wins over Grand Slams

One area where Steve’s perspective shifted most was around the value of “small wins.” Initially seeking large, transformative breakthroughs, he came to see the power of incremental, practical gains—what he and Leon call “everyday AI.” Citing Brendan Arbuckle’s three buckets of AI (everyday, research, and operational), Steve noted that success in the mundane can be just as transformative as high-profile scientific discoveries. “Like in Moneyball, singles matter,” he said.

A Moving Frontier

Leon introduced the idea of the “jagged frontier,” borrowed from Ethan Mollick’s Co-Intelligence, to describe AI’s uneven and unpredictable performance. Humans expect consistency—yet AI excels in one task and fails spectacularly in another. Over time, he predicts, engineering and design—not just better models—will “smooth that frontier.”

As the episode closed, both hosts agreed that healthcare’s AI revolution is accelerating, but its success will depend less on the technology itself and more on the people guiding it—those who understand its potential, its limits, and its ethical obligations. “We’re trying to surf the avalanche,” Leon said, “and the only way to stay upright is to keep learning.”