In the early days of the Covid-19 pandemic, hospitals were desperate for ways to manage the flood of seriously ill patients. Many turned to an artificial intelligence algorithm developed by Epic Systems, the electronic health record company, to predict which patients were most likely to rapidly deteriorate so they could get the critical care they needed.
Then, as now, many health systems implemented these kinds of proprietary AI algorithms without a clear sense of how well they perform. But in a rare head-to-head analysis, Yale’s health system evaluated the statistical performance of six early warning scores on the same clinical data from seven of its hospitals, publishing its results Tuesday in JAMA Network Open. The analysis revealed that some clinical AI models aren’t all they’re cracked up to be.
“We didn’t set out to write a paper,” said co-author Deborah Rhodes, chief quality officer for Yale New Haven Health System and associate dean of quality for Yale School of Medicine. “We set out to find the best tool.” Across the country, Epic’s early warning score is widely used because it comes built into the company’s electronic health record. “My health system really wanted to go with the tool that was free,” said Rhodes.