Amy Price

The Unfinished System

Amy Price was hit by a yellow Corvette at a stop sign. Broken neck. Ruptured discs. Severe brain injury. Over four million dollars in medical bills. She was told she’d probably need to be institutionalized.

She enrolled at Open University instead, with a three-digit memory span. She and her classmates made a pact that they would all become doctors. When her doctoral committee disbanded due to austerity measures, her friends secretly sent her application to universities across the UK. They chose Oxford. She got in.

Today, Price is Editor-in-Chief of the Journal of Participatory Medicine, a senior researcher at Dartmouth Health, and one of the most thoughtful voices on where patients fit in the design of AI systems. On a recent episode of Practical AI in Healthcare, she sat down with Leon Rozenblit to explain why.

The knowledgeable human who cares

We talk a lot about keeping “humans in the loop” when AI is making decisions about health. Price thinks that framing is too thin. “A knowledgeable human that cares,” she corrected. “Anyone can become a knowledgeable human given the right tools.” The caring part is what makes oversight more than paperwork.

She pushed back, too, on how we talk about AI failure. Calling a model's output a "hallucination," she argued, is like calling a colleague's honest mistake a character flaw. "If someone comes up with an assumption that's just slightly off because they don't have all the information, and you start getting all judgy about it, I'm gonna shut down." The point isn't that models don't fail. They do. The point is that judgment, rather than curiosity, is what makes those failures harder to fix.

AI as cognitive prosthesis

Price’s own relationship with AI is shaped by her injury. Technology was a lifeline during recovery. She depended on it to compensate for the cognitive functions she’d lost. When AI tools arrived, she already had years of practice using technology as a thinking partner.

She now uses AI to research her own health conditions, then brings findings to her doctors. Her cardiologist’s response: “This is fantastic.” AI, she said, “gets to things that nobody has time to tell you.” But she was quick to qualify: the tool works because she put in the hours to learn how to use it.

That qualifier matters. Price ran an informal study with patients using AI for health questions, and the results were stark. Some got excellent information. Others got junk. The difference was skill, exposure, and mental models. “What worked best was to work with them to sharpen their skills in specific areas,” she said. Communities learning together, not individuals struggling alone.

Unfinished, not broken

The line that may define this episode: “A broken system means you throw it out. An unfinished system means there are errors all along the way. It’s a part of the process.”

Price's point is that the frameworks for evaluating healthcare AI already exist: Cochrane reviews, systematic review standards, reporting guidelines. They've been refined over decades. What hasn't happened is applying them to AI development. The methodology is ready. The application isn't.

If you throw out the system, she argued, “you throw away the history of everything that will bring you greatness.”

This is an optimistic message, but it’s not naive. It’s the optimism of someone who rebuilt her own cognitive function one digit at a time, starting from a hospital bed where she was told to stop trying. The system isn’t broken. There’s still work to do.

Listen to the full conversation.