When AI Meets Real Patients: Inside Sanofi’s AI-Powered Clinical Development Pipeline
In Part 1 of our conversation with Matt Truppo, the story was about AI doing hard science: finding drug targets, designing molecules, engineering proteins. The risks were manageable. “The risk of letting the AI take on some decisions autonomously is very, very low,” Truppo told us, describing the preclinical side.
Part 2 is a different conversation. The molecules have reached patients. The stakes have changed.
Digital patient twins: three proof points
Truppo leads with numbers that frame the whole episode. Ninety percent of drugs fail during clinical development, the longest and most expensive phase of R&D. “If we can reduce attrition by even 10%, that would effectively be a doubling of our industry’s productivity.”
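The "doubling" claim follows from simple arithmetic, if "10%" is read as ten percentage points of attrition: with 90% of candidates failing, only 10% reach approval, so cutting failure to 80% doubles the approval rate. A minimal sketch, using only the episode's round figures:

```python
# Illustrative arithmetic behind the "doubling productivity" claim
# (assumes "10%" means ten percentage points of attrition).
baseline_failure = 0.90                    # ~90% of drugs fail in the clinic
baseline_success = 1 - baseline_failure    # 10% reach approval

improved_failure = baseline_failure - 0.10  # attrition cut by ten points
improved_success = 1 - improved_failure     # 20% reach approval

# Same number of candidates entering the clinic, twice the approvals.
print(round(improved_success / baseline_success, 2))  # → 2.0
```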
His team’s approach: digital patient twins built on quantitative systems pharmacology (QSP). These are mechanistic simulations, grounded in differential equations and known disease biology, not black-box AI. And they have real results.
Dupixent, Sanofi’s largest product, needed expansion into adolescent patients. QSP models simulated adolescent response from adult clinical data, and the FDA accepted the modeling in lieu of a dedicated pediatric trial. Reduced cost, shorter timelines, fewer children exposed to experimental therapy.
Xenpozyme, for a fatal rare disease (acid sphingomyelinase deficiency), used the same approach. The FDA highlighted it as an exemplary application of QSP modeling and digital patient twins, inviting Sanofi to present at a workshop on advancing these methods across the industry.
A multispecific antibody candidate for asthma tested the digital twin concept most aggressively. Truppo’s team predicted Phase 1B biomarker results (eosinophils, IgE, fractional exhaled nitric oxide) from the digital twin models. The predictions matched closely enough that Sanofi went directly to Phase 2B, skipping the dose-finding Phase 2A study entirely.
But Truppo was equally direct about the limits. These models work when you have a good map of the disease’s pathophysiology and some prior human data. For first-in-class targets, where no prior human efficacy data exists, “that’s the big gap.” His team is working with the Biomes Institute in Heidelberg on next-generation approaches using organoids and AI-driven causal disease modeling, but “it’s not built yet.”
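To give a flavor of what "grounded in differential equations" means in the QSP style, here is a toy turnover model: a drug concentration decays first-order and inhibits production of a biomarker, which dips and then recovers as the drug clears. Every equation, parameter name, and value below is a textbook-style illustration, not anything from Sanofi's actual models.

```python
import math

# Hypothetical QSP-flavored sketch: first-order drug decay driving
# Emax-style inhibition of a biomarker's production (turnover model).
K_EL = 0.1    # drug elimination rate (1/h) -- illustrative value
K_IN = 5.0    # biomarker production rate
K_OUT = 0.5   # biomarker turnover rate (1/h)
IC50 = 2.0    # drug level giving half-maximal inhibition
C0 = 10.0     # drug concentration at time of dosing

def simulate(hours=72.0, dt=0.01):
    """Euler-integrate dB/dt = K_IN * (1 - I(C)) - K_OUT * B."""
    b = K_IN / K_OUT   # biomarker starts at its untreated steady state
    trace = []
    for i in range(int(hours / dt)):
        t = i * dt
        c = C0 * math.exp(-K_EL * t)   # first-order drug decay
        inhibition = c / (c + IC50)    # Emax-style inhibition, Emax = 1
        b += (K_IN * (1 - inhibition) - K_OUT * b) * dt
        trace.append((t, b))
    return trace

trace = simulate()
nadir = min(b for _, b in trace)  # biomarker dips, then recovers as drug clears
```

Real QSP models chain dozens of such equations across cell types and pathways; the point is that the dynamics are stated mechanistically and can be inspected, which is exactly the contrast with black-box AI that Truppo draws.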
The Clinical Control Tower
On the operations side, Sanofi built what they call a Clinical Control Tower: a unified dashboard that replaced over 15 legacy clinical trial management applications and now serves over 4,000 users, including on mobile.

We pushed Truppo on this. “A lot of what you just said has been done in the absence of AI,” Steve noted. “What’s the new thing here?”
Truppo’s answer was measured. Much of the value comes from data consolidation. But the AI layer adds risk flagging that incorporates factors like geopolitical shipping lane disruptions into clinical supply chain forecasting. It connects trial recruitment trajectories to readout timelines to portfolio-level prioritization. “The AI piece is really about bringing all of those together so that we can start to look at it on a program and portfolio-wide level,” he said.
GenAI in the filing room
Regulatory document generation is where large language models dominate Sanofi’s pipeline. Clinical study reports, which can take 17 weeks each and number in the hundreds annually, have seen a 35% time reduction so far. Truppo’s target for 2026 is 70%. Eighty percent of AI-generated first drafts already meet submission quality standards.
The approach: fine-tune off-the-shelf LLMs on Sanofi’s specific data, writing conventions, and trial structures. Not every section of the Common Technical Document is covered yet. “We’re taking one bite at a time,” Truppo said.
He also described using AI to draft responses to regulatory agency questions, pulling relevant data faster and producing more complete initial responses. “You could imagine a world where eventually regulators’ AI is talking to pharma companies’ AI,” he observed, before adding that human experts review and own every response.
The executive who experimented on himself
Then the conversation shifted register. Truppo described a personal project that started as a summer challenge: could he cut his own executive workday in half using AI?
He started with a single “super agent to rule them all.” It failed miserably. Too general to be useful in any specific domain. So he broke his actual job description into components and built specialized agents for each one: morning prep, one-on-one meetings, email drafting (trained on 2,000+ of his own emails), scientific strategy support, and a governance co-assistant.
The governance agent was the most striking. During live decision meetings, Truppo chats with it in real time. It can surface cross-program impacts of a governance decision: “If you pre-vet this program in CMC, here are the programs that would have to slow down.” That cross-cutting analysis, he noted, “is something you often miss in a project governance meeting.”
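The shift Truppo describes, from one general agent to narrow agents mapped to components of a job, is a common routing pattern. A minimal sketch, with every agent name and behavior hypothetical:

```python
from typing import Callable

# Hypothetical stand-ins for specialized agents; in practice each would
# wrap its own prompt, tools, and fine-tuning data.
def morning_prep(task: str) -> str:
    return f"[morning-prep] briefing for: {task}"

def email_draft(task: str) -> str:
    return f"[email] draft reply for: {task}"

def governance(task: str) -> str:
    # The real analog is where cross-program impact analysis would live.
    return f"[governance] cross-program impact check for: {task}"

# The "job description broken into components" becomes a routing table:
# one narrow agent per component, rather than one general super agent.
AGENTS: dict[str, Callable[[str], str]] = {
    "prep": morning_prep,
    "email": email_draft,
    "governance": governance,
}

def dispatch(kind: str, task: str) -> str:
    agent = AGENTS.get(kind)
    if agent is None:
        raise ValueError(f"no specialized agent for {kind!r}")
    return agent(task)
```

The design choice mirrors Truppo's finding: a single agent prompt spread across every domain was "too general to be useful," while narrow agents can each be trained on domain-specific data (like his 2,000+ emails).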
The outcome: 30% time savings, measured by AI analysis of his calendar six months before and after deployment. Not the 50% he aimed for, but rigorously quantified.
He also built "Digital Matt," an AI avatar trained on his voice and appearance, which gave three speaking engagements while the real Matt spoke elsewhere. "It started as a bit of a joke," he said, "but it is, in a weird way, a preview of what the future could be."
The human line
Throughout both episodes, one principle held firm: “Every decision we make about putting a patient on drug, about what the dose is, about how we interpret the data, that has to firmly remain in the hands of expert human beings. These are all tools to help us make better decisions, but they do not replace the expert in the room.”
Truppo’s five-year vision is agentic orchestration across the entire R&D pipeline, with AI handling the connective tissue between functions while humans retain every decision that touches patients. Whether the industry gets there depends on solving the data integration, explainability, and regulatory alignment challenges he was candid about. But the early proof points, from Dupixent to Digital Matt, suggest the trajectory is real.
🎙️ Listen to the full conversation
You Might Also Enjoy
- AI-Powered Clinical Trial Matching with Adam Blum — Precision matching across 84+ eligibility attributes, tackling the same trial enrollment bottlenecks that Sanofi’s Clinical Control Tower addresses from the pharma side.
- Patient-Authorized Real-World Data for Biopharma with Shashi Shankar — The foundational data layer that digital patient twins depend on: how longitudinal patient data gets collected, consented, and consolidated across sites.
- 50 Years of Clinical AI with Edward Shortliffe — The explainability debate that runs through Truppo’s episode has a 50-year history. Shortliffe’s perspective on where the field has been and where it’s headed adds essential context.