Predictive models can help find high-risk patients with asthma and manage them proactively, but prior models miss the highest-risk patients and may mislabel low-risk patients.
The best way to identify high-risk patients and manage their care effectively is to use models that predict patient risk; however, existing predictive models can miss more than half of the true highest-risk patients and can mislabel low-risk patients as high risk, resulting in suboptimal care and wasted resources.
New site-specific models can predict whether a patient with asthma will have a hospital encounter in the coming year, but gaps remain: the models do not generalize well across sites and patient subgroups. The models, their gaps, and proposed solutions were published in JMIR Medical Informatics.
“As the patient population [with asthma] is large, a small boost in model performance will benefit many patients and produce a large positive impact,” said author Gang Luo, DPhil, of the Department of Biomedical Informatics and Medical Education at the University of Washington. “Of the top 1% patients with asthma who would incur the highest costs, for every 1% more whom one could find and enroll, one could save up to US$21 million more in asthma care every year as well as improve outcomes.”
Luo created 3 site-specific models, one each for University of Washington Medicine (UWM), Intermountain Healthcare (IH), and Kaiser Permanente Southern California (KPSC). Previous models had an area under the receiver operating characteristic curve (AUC) of ≤0.79 and a sensitivity of ≤49%. The new models raised the AUC to 0.90 and sensitivity to 70% for UWM, 0.86 and 54% for IH, and 0.82 and 52% for KPSC.
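For readers less familiar with these metrics, the short Python sketch below (illustrative only, not the study's code) shows how AUC and sensitivity are computed for a binary outcome such as a hospital encounter; the example data and the 50% risk cutoff are assumptions made for illustration.

```python
# Illustrative only: computing AUC and sensitivity for a binary
# hospital-encounter prediction. The labels, predicted probabilities,
# and the 0.5 decision threshold are assumptions for this example.
from sklearn.metrics import roc_auc_score, recall_score

# 1 = patient had a hospital encounter for asthma in the following year
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
# Model-predicted probabilities of a hospital encounter
y_prob = [0.81, 0.12, 0.47, 0.66, 0.09, 0.38, 0.22, 0.71]

auc = roc_auc_score(y_true, y_prob)              # threshold-free ranking quality
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]  # assumed 50% risk cutoff
sensitivity = recall_score(y_true, y_pred)       # share of true encounters caught

print(f"AUC: {auc:.2f}, sensitivity: {sensitivity:.2f}")
```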
The first gap identified was that these site-specific models have suboptimal generalizability when applied to other sites. The second was that the models have large performance gaps when applied to specific patient subgroups.
Per Luo, a new machine learning technique could address the first gap and create cross-site generalizable predictive models. Another machine learning technique could automatically raise model performance for subgroups with poor performance without sacrificing performance on other subgroups.
The proposed machine learning techniques do not depend on a specific disease, patient cohort, or health care system. “Given a new data set with a differing prediction target, disease, patient cohort, set of health care systems, or set of variables, one can use our proposed machine learning techniques to improve model generalizability across sites, as well as to boost model performance on poorly performing patient subgroups while maintaining model performance on others,” Luo wrote.
The techniques can also improve model performance for outcomes such as adherence and no-shows, helping direct resources toward these issues, for example through interventions to improve adherence or phone reminders to reduce the number of no-shows.
“Our proposed predictive models are based on the OMOP [Observational Medical Outcomes Partnership] common data model and its linked standardized terminologies, which standardize administrative and clinical variables from at least 10 large health care systems in the United States,” Luo explained. “Our proposed predictive models apply to those health care systems and others using OMOP.”
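To give a concrete sense of what OMOP-based data look like, the sketch below (an illustration, not the study's pipeline) builds the prediction label of a hospital encounter in the coming year from two standard OMOP common data model tables; the file names, cutoff date, and asthma concept set are assumptions made for the example.

```python
# A minimal sketch of label construction on OMOP-formatted data.
# Table and column names follow the OMOP common data model; the file
# names, cutoff date, and asthma concept set are illustrative assumptions.
import pandas as pd

visits = pd.read_csv("visit_occurrence.csv", parse_dates=["visit_start_date"])
conditions = pd.read_csv("condition_occurrence.csv")

CUTOFF = pd.Timestamp("2021-01-01")   # assumed end of the feature period
INPATIENT, EMERGENCY = 9201, 9203     # OMOP visit concept IDs
ASTHMA_CONCEPTS = {317009}            # illustrative asthma concept set

# Patients with a recorded asthma diagnosis
asthma_patients = set(
    conditions.loc[conditions["condition_concept_id"].isin(ASTHMA_CONCEPTS),
                   "person_id"]
)

# Label = 1 if the patient has an inpatient or emergency visit
# in the year after the cutoff
next_year = visits[
    visits["visit_concept_id"].isin([INPATIENT, EMERGENCY])
    & (visits["visit_start_date"] >= CUTOFF)
    & (visits["visit_start_date"] < CUTOFF + pd.DateOffset(years=1))
]
encounter_ids = set(next_year["person_id"])
labels = {pid: int(pid in encounter_ids) for pid in asthma_patients}
```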
Reference
Luo G. A roadmap for boosting model generalizability for predicting hospital encounters for asthma. JMIR Med Inform. 2022;10(3):e33044. doi:10.2196/33044