Rushing Headlong Part II: Bright Future Ahead & Avoiding Past Mistakes
Building for the Future: Dr. S. Yin Ho on Responsible AI and the Next Era of Health IT
In the second half of our conversation with Dr. S. Yin Ho, author of Rushing Headlong: Health IT’s Legacy and the Road to Responsible AI, the discussion turns from history to the horizon. Having lived through the turbulent evolution of health information technology, Dr. Ho brings both optimism and hard-earned caution to the current wave of artificial intelligence sweeping through healthcare.
The case for smaller, smarter AI
Dr. Ho’s central concern is one of trust. Large language models (LLMs) dominate today’s AI landscape, but she questions whether models trained on vast, unvetted data can ever be trusted with human health. “We don’t control what they learn or what data they’re fed,” she says. “Hallucinations happen—and in medicine, that’s not a theoretical problem.”
Instead, she advocates for “small language models” — domain-specific tools trained on curated, high-quality data under strict governance. In her view, accuracy and explainability matter more than scale. “It’s not about size,” she notes. “It’s about refinement.” Smaller models can be deployed at the edge, closer to the point of care, where their training data and decision rules remain transparent and auditable.
Decision support vs. decision control
Dr. Ho emphasizes the crucial difference between AI that supports decisions and AI that makes them. The danger, she warns, lies in automation complacency—the tendency of humans to over-trust machine outputs. “At first, we review every suggestion. But over time, we just click ‘accept.’ That’s how decision support quietly turns into decision control.”
Keeping human judgment in the loop, she insists, must be a deliberate design choice, not an afterthought. Clinicians, not algorithms, should set the limits of autonomy in healthcare systems.
The problem with “good enough” data
Data quality remains the Achilles’ heel of every health IT system. Dr. Ho points out that much of what resides in EHRs was recorded for billing, not clinical insight. Using it to train AI models risks encoding administrative bias rather than clinical truth. “We’re training on mediocre notes and expecting excellence from the output,” she cautions.
She sees promise in new methods for abstracting meaning from unstructured notes — combining human expertise with machine scalability to produce data that is both accurate and interpretable. But she warns that these efforts often remain “one-off,” tied to specific research questions rather than broad reusability. “We need to start with the most common questions in medicine and build reusable abstractions around them,” she says. “Otherwise, we’ll just rebuild the same silos, one dataset at a time.”
Avoiding another Groundhog Day
Reflecting on past missteps, Dr. Ho fears that without a course correction, AI could accelerate the same administrative burdens that once crippled EHR adoption. “Right now, the easiest use cases for AI are billing and paperwork,” she notes. “If all the investment goes there, we’ll just repeat the cycle—only faster.”
Breaking that cycle requires what she calls “rage-building” — a grassroots movement of clinicians and innovators creating systems that work for care, not compliance. She envisions small, agile teams designing alternatives that prioritize research integration, data quality, and clinician experience from the outset.
The human side of automation
Among current AI successes, ambient scribing tools promise to free clinicians from hours of note-taking. Yet Dr. Ho warns that this convenience can come at a cost. AI scribes capture words but miss nuance—the tone, the hesitation, the unspoken cues that often reveal a patient’s true condition. “Sometimes the AI summary is too clever,” she says. “It smooths out what matters.”
For her, the lesson is clear: workflow, not just workload, defines success. “We’ve spent decades replacing actions with technology,” she observes. “But we’ve forgotten to ask what those actions were meant to achieve.”
Empowering patients through access
Real patient empowerment, she argues, begins with access to one’s own data—a problem healthcare still hasn’t solved. Despite regulatory rights, patients often find themselves collecting DVDs and paper charts from multiple hospitals. “In theory, they can access everything,” she says. “In practice, they can’t.”
Dr. Ho believes startups focused on assembling and contextualizing patient data could reshape the ecosystem. “That’s step one,” she says. “Only when patients can see and understand their data can they participate meaningfully in their care.”
Looking ahead
As we look five years forward, Dr. Ho envisions a world where patients, clinicians, and researchers share a more transparent data fabric—where insights flow easily but responsibly between care and discovery. She hopes the next generation of innovators, unburdened by legacy infrastructure, will succeed where past systems failed.
Her advice for today’s clinicians is unequivocal: “Get smart about AI. The last time health IT was built, physicians were left out. There’s no excuse this time.”