- Mount Sinai researchers published insights on new machine learning models that predict patients' coronavirus risks after they're admitted to the hospital.
- And while AI shows promise in helping providers contend with a new wave of outbreaks, developers will need to address bias in their algorithms to minimize disparities in health outcomes.
Researchers at the New York-based health system published insights on new machine learning models they developed that predict patients' coronavirus risks three, five, seven, and 10 days after they're admitted to the hospital—including patients' risk of mortality or need for intubation. The AI models were developed using electronic health record (EHR) data from over 4,000 adult coronavirus patients in the Mount Sinai Health System.
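To make the approach concrete, here is a minimal sketch of how a short-term risk model might be trained on EHR-style admission data. The features, model choice, and data below are invented for illustration and are not Mount Sinai's actual pipeline, which the article does not detail:

```python
# Toy sketch: predicting a severe event within a fixed horizon from
# synthetic admission features. All names and values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4000  # roughly the cohort size cited in the article

# Synthetic stand-ins for EHR fields recorded at admission
age = rng.normal(60, 15, n)
spo2 = rng.normal(94, 4, n)        # oxygen saturation (%)
crp = rng.lognormal(2.5, 0.8, n)   # C-reactive protein (mg/L)
X = np.column_stack([age, spo2, crp])

# Synthetic label: severe outcome (e.g., intubation) within 7 days
logit = 0.04 * (age - 60) - 0.25 * (spo2 - 94) + 0.01 * crp - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUROC: {auc:.2f}")
```

In practice, separate models would be fit for each horizon (3, 5, 7, and 10 days) and each outcome (mortality, intubation), and validated on held-out patients before deployment.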
Providers now have access to far more comprehensive coronavirus data for training predictive models than they did at the start of the pandemic—which means AI is better equipped to help contend with the current influx of cases. Early in the pandemic, coronavirus data was thin: Many health systems only had case counts and death counts, and AI tools weren't being fed high-quality data—like information on care plans—that could help providers better contend with new cases, according to clinicians at Harvard's Biomedical Informatics Department.
Now, researchers have a slew of retrospective coronavirus data to pull from—including factors like patients' vitals and labs—to feed into algorithms and create predictive models for short-term or long-term coronavirus care. We've seen major EHR vendors roll out their own coronavirus risk prediction models, for example: Earlier this month, Epic gave its clients access to an AI risk assessment tool—developed by Cleveland Clinic—in its MyChart patient portal.
These new, validated coronavirus prediction tools should help frontline healthcare workers better mitigate coronavirus spread to some extent: As of Sunday, the US recorded more than 11 million total cases—adding 1 million new cases in just one week, per The New York Times.
But AI isn't going to be a silver bullet—some clinicians are concerned these algorithms may actually widen disparities in care if they pull from biased data sets. For context, many pre-existing risk assessment algorithms used to guide specialty care assign different scores based on race, which often results in members of certain racial groups receiving fewer health services, per an August 2020 NEJM report.
And given that Black and Latinx individuals have been disproportionately affected by the coronavirus compared with other cohorts, developers of risk assessment models will have to ensure their models aren't inadvertently exacerbating these health disparities.
Experts note that one way to avoid AI bias is to tune algorithms to look at different "proxy" measures—like the number of chronic conditions instead of costs or race, for instance. And some developers are already tackling this AI barrier: Mount Sinai chose to prioritize more objective measures in its coronavirus risk assessment tool—like comorbidities and lab values—rather than using race as a proxy.
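The proxy swap described above can be sketched in a few lines. The column names and values here are invented for the example; the point is simply that race and cost-based proxies are excluded from the feature set in favor of clinical measures:

```python
# Illustrative sketch: excluding biased proxies (race, prior spending)
# from a risk model's features in favor of clinical measures.
# All column names and data are hypothetical.
import pandas as pd

patients = pd.DataFrame({
    "race": ["A", "B", "A", "B"],
    "cost_last_year": [1200, 300, 5000, 800],  # access-driven, biased proxy
    "n_chronic_conditions": [1, 3, 4, 2],      # more direct measure of need
    "crp": [12.0, 45.0, 80.0, 30.0],           # objective lab value
})

# Keep only objective measures as model inputs
features = patients.drop(columns=["race", "cost_last_year"])
print(list(features.columns))  # ['n_chronic_conditions', 'crp']
```

Dropping a sensitive attribute alone doesn't guarantee fairness—other features can still encode it indirectly—but choosing need-based measures like comorbidity counts over cost, as the experts quoted here suggest, removes one well-documented source of bias.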