The Legal Reality of AI in Healthcare: Risk, Liability, Privacy, and What Happens Before Regulation Arrives
AI in healthcare is moving faster than most legal and operational frameworks. That gap creates excitement, but it also creates risk. In this episode of Practical AI in Healthcare, we spoke with two professionals who spend their days thinking about exactly that gap: Kathy Roe, a longtime health lawyer and principal at Health Law Consultancy, and Kenny White, director of the Managed Care Industry Group at Alliance Insurance Services and a veteran health-law professional turned risk and insurance advisor.
The conversation was not theoretical. It was about what happens when AI meets the real machinery of healthcare: clinical decision-making, payer workflows, privacy obligations, and the messy question of who is responsible when something goes wrong.
Risk is broader than legal compliance
Kenny started with a plain-language framing that is easy to lose sight of in AI discussions. Risk is not only about breaking a rule. Risk is exposure to harm across many categories: financial, operational, reputational, political, ethical, and legal. If an organization uses AI in any form, from chatbots to clinical pathways, it is creating new exposures. The first step is not buying a policy or writing a disclaimer. It is identifying what risks you are actually creating.
That sounds obvious, but it is a useful reset. Many organizations jump directly to “What does the law say?” when the more practical early question is “What can break, who gets hurt, and what downstream consequences follow?”
Medical malpractice, product liability, and the problem of accountability
When the discussion turned to medical malpractice, Kathy highlighted a core issue: in the AI context, risk can come from at least two places.
First, the AI tool itself may behave like a product. If the tool is defective, fails in predictable ways, or is not tested appropriately, the question of product liability enters the picture.
Second, the clinician’s conduct still matters. Even if an AI system is involved, courts and regulators will examine whether the physician or other healthcare professional met the applicable professional standard of care in deploying, interpreting, and acting on the tool’s output.
The hard part is that, right now, there is no clean line for where the tool’s responsibility ends and the clinician’s responsibility begins. We talk constantly about “human in the loop,” but that phrase carries an implication: if a human is in the loop, a human may be accountable.
Kenny added an important legal nuance. Medical malpractice is generally tied to licensed professionals. AI systems do not have licenses. That creates an awkward fit if you attempt to apply traditional malpractice frameworks directly to software behavior. It also raises a practical question for health systems and clinicians: if a tool influences decision-making, and it produces harm, what is the cause of action, and who becomes the defendant?
That is not an abstract question. It shows up in adjacent areas already, such as advanced robotics and remote or semi-autonomous procedures. If a clinician chose the modality, but a machine executed it and the supporting ecosystem includes technicians and software engineers, liability can become a “law school exam” problem, as Kenny put it. Yet these are exactly the kinds of workflows healthcare is moving toward.
Privacy and de-identification in an era of easy re-identification
Leon pushed the conversation toward the reality that technology changes the assumptions underlying legal standards. Kathy brought it back to HIPAA and the concept of “minimum necessary.” AI training often benefits from more data, but HIPAA’s structure is designed to limit collection and disclosure to what is necessary for the purpose. That tension is only going to grow.
De-identification is another pressure point. HIPAA offers two main paths: the Safe Harbor method (removing 18 specified categories of identifiers) and the expert determination method (a statistical assessment by a qualified expert). Kathy noted the practical challenge: expert determination is more rigorous, but it costs time and money, and it can feel slow in a market that is demanding speed.
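To make the Safe Harbor path concrete, here is a minimal sketch of the kind of field-level stripping it implies. The record schema and the redact_safe_harbor helper are invented for illustration, not a compliance tool; a real implementation must cover all 18 identifier categories, including identifiers buried in free-text notes.

```python
# Hypothetical illustration of Safe Harbor-style field stripping.
# Not a compliance tool: real de-identification must cover all 18
# HIPAA identifier categories, including identifiers in free text.

from datetime import date

# Field names here are assumptions for the example, not a standard schema.
SAFE_HARBOR_DROP = {"name", "mrn", "phone", "email", "ssn", "street_address"}

def redact_safe_harbor(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed
    and quasi-identifiers generalized in the spirit of Safe Harbor."""
    out = {k: v for k, v in record.items() if k not in SAFE_HARBOR_DROP}

    # Safe Harbor permits only the year of dates; full dates must go.
    if isinstance(out.get("birth_date"), date):
        out["birth_year"] = out.pop("birth_date").year

    # Ages over 89 must be aggregated into a single "90+" category.
    if isinstance(out.get("age"), int) and out["age"] > 89:
        out["age"] = "90+"

    # ZIP codes keep only the first three digits (and low-population
    # three-digit prefixes must be zeroed out entirely).
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3]

    return out

print(redact_safe_harbor({
    "name": "Jane Doe", "mrn": "12345", "age": 93,
    "birth_date": date(1931, 4, 2), "zip": "60614", "dx": "I10",
}))
```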
The deeper issue is that Safe Harbor can feel less safe as data science advances. The more external datasets exist, and the more powerful cross-referencing becomes, the easier it may be to re-identify records that used to be considered de-identified “enough.” That does not automatically mean every dataset is re-identifiable, but it does mean organizations should treat de-identification as a living risk decision, not a one-time compliance checkbox.
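A toy example shows why this is more than hypothetical. The datasets and field names below are invented, but the mechanics of a linkage attack are just a join on shared quasi-identifiers:

```python
# Toy linkage attack: joining a "de-identified" record back to a
# public dataset on shared quasi-identifiers. All data is invented.

deidentified = [
    {"birth_year": 1931, "zip3": "606", "sex": "F", "dx": "I10"},
    {"birth_year": 1958, "zip3": "606", "sex": "M", "dx": "E11"},
]

# e.g., a voter roll or marketing list that was never "health data".
public_list = [
    {"name": "Jane Doe", "birth_year": 1931, "zip3": "606", "sex": "F"},
    {"name": "John Roe", "birth_year": 1958, "zip3": "312", "sex": "M"},
]

QUASI = ("birth_year", "zip3", "sex")

for rec in deidentified:
    matches = [p for p in public_list
               if all(p[k] == rec[k] for k in QUASI)]
    if len(matches) == 1:  # a unique join is a re-identification
        print(f"{matches[0]['name']} -> dx {rec['dx']}")
```

Research has repeatedly found that birth date, ZIP code, and sex alone can uniquely identify a large share of the population, which is why a Safe Harbor dataset's safety depends on what other data exists around it.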
IP, training data, and the limits of “scrape first, apologize later”
On intellectual property, Kenny emphasized a simple principle: AI does not exempt anyone from existing IP laws. Ownership, permissions, likeness rights, copyrights, and contractual restrictions still apply.
A practical example is model training. If you are training AI systems on contemporary scientific, medical, or proprietary content, you need the rights to use that content. Public-domain material is one thing. Current copyrighted or contract-restricted material is another. This is part of why we have already seen lawsuits over training datasets and the use of copyrighted material.
And in healthcare, those issues often compound. Training data can implicate both IP and privacy simultaneously, especially if it includes clinical text, images, or other data derived from patient care workflows.
If regulation lags, where do the rules come from?
A major theme in this episode is that law is slow by design. That does not mean nothing governs AI. It means governance will often be assembled from multiple sources before any comprehensive statutory regime arrives.
Kathy outlined where pressure and structure can emerge in the near term:
• Courts, through litigation and case law
• Contracts, through negotiated terms, indemnities, and limitations of liability
• Institutional policies and governance structures, created internally
• State-level activity in specific use cases, especially where harms become visible
Kenny was even more blunt: we are “flailing about in an ocean of chaos,” and much of the immediate progress will come from organizations policing themselves, because formal rulemaking cannot keep pace with capability changes.
Insurance will move, too
One of the most practical parts of the discussion came near the end: insurance. Kenny warned that insurers and underwriters will adapt to AI-driven claims and exposures. That can show up as exclusions, new underwriting requirements, or coverage shifts. Cyber coverage is already seeing early movement in this direction, and it is reasonable to expect similar evolution in professional liability and errors-and-omissions lines as AI use becomes more common.
This matters because many organizations assume they are covered for “what they have always done.” AI may change what is considered “what you are doing,” even if the organization experiences it as a normal operational upgrade.
What clinicians should watch for right now
Steven posed the practical clinician question: will doctors be penalized if they use AI and it is wrong, or if they do not use AI when they “should have”?
Kathy’s answer was a strong anchor: today, the standard of care is not universally defined as “use AI.” But the standard of care can shift as peers adopt tools and as expectations change. The responsible posture for professionals now is literacy: learning what these tools can and cannot do, understanding their limitations, and being prepared for a future where use may become expected in certain settings.
Kenny added a courtroom reality: juries may trust “the computer” in ways that create new risk for clinicians who deviate from AI recommendations, even when clinical judgment is defensible. That social and cultural dynamic will matter, not just the technical performance of the tool.
The bottom line
AI in healthcare is not waiting for perfect laws, perfect definitions, or perfect accountability models. That means health systems, payers, vendors, and clinicians need to act like grown-ups about risk.
Build governance. Negotiate contracts like they matter. Treat privacy and de-identification as evolving exposures. Track insurance language. Invest in literacy. And assume that when something goes wrong, the question will not be “Was this AI?” The question will be “Who had responsibility to foresee, manage, and mitigate the risk?”
That is the legal reality of AI in healthcare, and it is already here.