Leon Rozenblit + Steve Labkoff

The AI Works. Now What?

The convergence nobody planned

We didn’t ask our guests to coordinate. Giovanni Donatelli builds medical translation infrastructure. Charlie Harp measures data quality. Adam Blum matches cancer patients to clinical trials. Shashi Shankar builds patient-authorized real-world data sets. Amy Price advocates for patients as co-designers of their own care.

Five completely different domains. Five working AI systems. And every one of them spent most of their time talking about the same thing: the infrastructure that makes AI useful.

That wasn’t our prompt. It was their experience. And the convergence tells us something about where healthcare AI is right now.

The foundation problem

Harp’s PIQI framework measures data quality against USCDI standards. The results are sobering: lab data quality across healthcare averages about 70%. That means roughly 30% of the lab data feeding AI systems is wrong, missing, or inconsistent. As he put it in language we’ve borrowed and plan to keep using: “We’re building pipes, but we haven’t decided if they’re sewer pipes or water pipes.”
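PIQI’s scoring internals aren’t spelled out here, but a number like “70% quality” is easiest to read as a pass rate over per-field checks. Here is a toy sketch of that reading in Python; the field names and checks are illustrative assumptions, not the PIQI framework’s actual rules.

```python
# Toy data-quality score: the fraction of checks a lab record passes.
# Field names and checks are illustrative, not PIQI's methodology.

def score_lab_record(record: dict) -> float:
    """Return the share of quality checks this record passes."""
    checks = [
        bool(record.get("loinc_code")),        # coded test identifier present
        record.get("value") is not None,       # result value present
        bool(record.get("units")),             # units recorded at all
        record.get("units") != "unknown",      # units are meaningful
    ]
    return sum(checks) / len(checks)

records = [
    {"loinc_code": "4548-4", "value": 6.1, "units": "%"},  # clean HbA1c result
    {"loinc_code": "", "value": 6.1, "units": "unknown"},  # missing code, junk units
]
average = sum(score_lab_record(r) for r in records) / len(records)
print(f"average quality: {average:.0%}")  # 75% for this toy corpus
```

Under a reading like this, “70% average quality” isn’t an abstraction: it’s three checks in ten failing, every time the data is touched.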

The timing matters. AI didn’t create the data quality problem. Decades of purpose-built systems did — optimized for billing, not for interoperability. AI made it visible because AI tries to use data for something beyond storage and claims submission. Then regulation caught up. Section 1557 enforcement got stricter in 2025, and data quality became a boardroom problem.

The scaffolding thesis

If the data is the foundation, Blum’s work is the next layer up. His CancerBot achieves 60% accuracy on clinical trial eligibility matching with a raw LLM. Add structured prompt engineering, in which medical experts encode their domain knowledge into a Prompt Workbench, and accuracy jumps to 90%.

That 30-point gap wasn’t closed by upgrading the model. It was closed by building better structure around it. The model is a commodity. The scaffolding is the product.
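To make “scaffolding” concrete, here is a minimal sketch of the difference between handing a raw model the question and wrapping the question in expert-authored structure. The rule contents, function names, and the `call_llm` placeholder are assumptions for illustration; this is not Blum’s actual Prompt Workbench.

```python
# Minimal sketch of prompt scaffolding. All names and rules are illustrative;
# `call_llm` stands in for whatever model API you use.

from dataclasses import dataclass

@dataclass
class EligibilityRule:
    """One expert-authored criterion, encoded once and reused on every query."""
    name: str
    instruction: str

# The domain expertise lives here, outside the model.
EXPERT_RULES = [
    EligibilityRule("staging", "Interpret TNM staging strictly; never infer a missing stage."),
    EligibilityRule("prior_therapy", "Treat prior therapy as exclusionary only if explicitly documented."),
    EligibilityRule("labs", "Answer 'indeterminate' when a required lab value is absent."),
]

def raw_prompt(patient: str, criteria: str) -> str:
    # The naive approach: hand the model the problem and hope.
    return f"Is this patient eligible for this trial?\n\nPatient: {patient}\n\nCriteria: {criteria}"

def scaffolded_prompt(patient: str, criteria: str) -> str:
    # The scaffolded approach: expert rules constrain how the model reads the data.
    rules = "\n".join(f"- {r.name}: {r.instruction}" for r in EXPERT_RULES)
    return (
        "You are screening a patient against clinical trial eligibility criteria.\n"
        f"Apply these expert rules before deciding:\n{rules}\n\n"
        f"Patient: {patient}\n\nCriteria: {criteria}\n\n"
        "Answer 'eligible', 'ineligible', or 'indeterminate', citing the rule you applied."
    )

# response = call_llm(scaffolded_prompt(patient_summary, trial_criteria))
```

The point the sketch makes is structural: the model call is one line, and everything that moves the accuracy sits around it.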

This directly challenges the dominant narrative in healthcare AI: that what matters most is which frontier model you’re using. Blum’s evidence says what matters is how much domain expertise you’ve encoded into the system surrounding it.

The human role, restructured

Donatelli gave us the clearest workforce evolution template we’ve had on the podcast. His translators don’t get replaced — they shift from doing translation to training AI and quality-assuring its output. “There’s a future for my translators” isn’t a platitude. It’s a job description change.

The pattern extends. Shankar is using AI to curate patient data for other AI systems — normalization, deduplication, coding across disparate records. That recursive infrastructure used to be the graveyard for patient data integration projects. Now the tools exist to make it work.
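The curation step sounds abstract until you see how small each operation is. Below is a toy normalization-and-deduplication pass in Python; the field names and the exact-match key are illustrative assumptions, and a real pipeline would use far more forgiving matching than this.

```python
# Toy curation pass: normalize disparate records, then deduplicate.
# Field names and the match key are illustrative, not a real pipeline.

def normalize(record: dict) -> dict:
    """Coerce records from different sources into one shape before matching."""
    return {
        "name": record.get("name", "").strip().lower(),
        "dob": record.get("dob"),                # assume ISO 8601 strings upstream
        "code": record.get("code", "").upper(),  # e.g., a LOINC test code
    }

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep one record per (name, dob, code) key; deliberately naive matching."""
    seen, unique = set(), []
    for r in map(normalize, records):
        key = (r["name"], r["dob"], r["code"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

records = [
    {"name": "Jane Doe ", "dob": "1970-01-01", "code": "4548-4"},
    {"name": "jane doe", "dob": "1970-01-01", "code": "4548-4"},  # same patient, same result
]
print(deduplicate(records))  # one record survives
```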

And Price tied it together with a distinction that reframed our thinking: healthcare AI isn’t broken. It’s unfinished. A broken system means you throw it out. An unfinished system means errors along the way are part of the process. The frameworks exist: Cochrane reviews, FHIR, Section 1557, the 21st Century Cures Act. They just haven’t been connected yet.

What maturation looks like

We’ve been building a thesis across 30 episodes and four reflection blocks. Block 1 established the vocabulary: mundane wins matter, governance over regulation. Block 2 stress-tested it against history. Block 3 tested it against evidence. Block 4 reveals the infrastructure layer: data quality, scaffolding, human roles, patient agency.

The trajectory isn’t dramatic. It’s a quiet shift in what the conversation is about. From “will this work?” to “what does the system need to look like?”

That shift — from argument to construction — might be the most reliable maturation signal there is. Not a eureka moment. Just builders building.

Listen: https://practicalaiinhealthcare.com/episodes/#S1E31