The American Journal of Managed Care
December 2023, Volume 29, Issue 12

Development and Validation of the COVID-19 Hospitalized Patient Deterioration Index

The authors developed and validated an accurate, well-calibrated, easy-to-implement COVID-19 hospitalized patient deterioration index to identify patients at high or low risk of clinical deterioration.

ABSTRACT

Objectives: To develop a COVID-19–specific deterioration index for hospitalized patients: the COVID Hospitalized Patient Deterioration Index (COVID-HDI). This index builds on the proprietary Epic Deterioration Index, which was not developed for predicting respiratory deterioration events among patients with COVID-19.

Study Design: A retrospective observational cohort was used to develop and validate the COVID-HDI model to predict respiratory deterioration or death among hospitalized patients with COVID-19. Deterioration events were defined as death or requiring high-flow oxygen, bilevel positive airway pressure, mechanical ventilation, or intensive-level care within 72 hours of run time. The sample included hospitalized patients with COVID-19 diagnoses or positive tests at Kaiser Permanente Southern California between May 3, 2020, and October 17, 2020.

Methods: Machine learning models and 118 candidate predictors were used to generate benchmark performance. Logit regression with least absolute shrinkage and selection operator and physician input were used to finalize the model. Split-sample cross-validation was used to train and test the model.

Results: The area under the receiver operating characteristic curve was 0.83. COVID-HDI identifies patients at low risk (negative predictive value [NPV] > 98.5%) and borderline low risk (NPV > 95%) of an event. Of all patients, 74% were identified as being at low or borderline low risk at some point during their hospitalization and could be considered for discharge with or without home monitoring. A high-risk group with a positive predictive value of 51% included 12% of patients. Model performance remained high in a recent cohort of patients.

Conclusions: COVID-HDI is a parsimonious, well-calibrated, and accurate model that may support clinical decision-making around discharge and escalation of care.

Am J Manag Care. 2023;29(12):e365-e371. https://doi.org/10.37765/ajmc.2023.89470

_____

Takeaway Points

The COVID-19 Hospitalized Patient Deterioration Index (COVID-HDI) is a parsimonious, well-calibrated, and high-performing model that may support resource allocation and clinical decision-making around discharge (with or without home monitoring) and escalation of care for hospitalized patients with COVID-19. The COVID-HDI:

  • builds on the Epic Deterioration Index (Epic-DI), a proprietary prediction model built into Epic electronic health record systems;
  • is an easy-to-implement regression model that includes the Epic-DI and 5 other predictors that are commonly available; and
  • predicts a composite deterioration outcome within 72 hours of run time that includes death, intensive care unit admission, intensive-level care, and intensive respiratory support (defined as intubation, high-flow oxygen, and bilevel positive airway pressure).

_____

Patients hospitalized with COVID-19—the disease caused by the SARS-CoV-2 virus—are at high risk of adverse events, which may lead to need for intensive respiratory support, the need for intensive care, or death.1,2 In the context of new COVID-19 variants driving infection rates and low uptake of booster vaccinations,3 a clinical decision support tool for early identification of patients at high and low risk of a deterioration event can support timely intervention and efficient discharge planning.

The majority of existing prediction models for COVID-19 outcomes rely on small samples or have other significant methodological challenges.4,5 Several high-performing COVID-19 outcome prediction models rely on machine learning approaches and large predictor sets.6 These models are difficult to integrate into electronic health record (EHR) systems and can only be used by health care systems that collect information on all model predictors and have advanced predictive analytic infrastructure. Other models are limited to predicting a deterioration event within a 24-hour lead time or to predicting the outcome for the entire hospitalization by making a once-and-for-all decision based on the data available on only the first hospital day.7,8

To address these limitations, we developed the COVID Hospitalized Patient Deterioration Index (COVID-HDI), a parsimonious yet high-performing regression-based model that can be implemented in most Epic-based EHR systems.8 The COVID-HDI builds on the Epic Deterioration Index (Epic-DI), a proprietary prediction model built into Epic EHR systems. The Epic-DI identifies patients at high risk of experiencing an escalation in care, defined as intensive care unit (ICU) transfer, rapid response or resuscitation team notification, or death, within 38 hours of run time.9 The Epic-DI was, however, not developed for predicting respiratory deterioration events among patients with COVID-19 and only predicts events up to 38 hours. When evaluated in the context of predicting deterioration among patients with COVID-19, the Epic-DI was able to identify small groups of low- and high-risk patients with good discrimination but low sensitivity.9

We improved on the Epic-DI by leveraging EHR data from a large integrated health care system, Kaiser Permanente Southern California (KPSC), and working closely with clinicians while following Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) standards to develop and validate a model that predicts a patient’s risk of a deterioration event, defined as either (1) requiring intensive respiratory care (high-flow oxygen, bilevel positive airway pressure [BiPAP], or mechanical ventilation), (2) ICU admission or level of support, or (3) death among hospitalized patients within 72 hours of run time.

METHODS

Setting

KPSC provides care to more than 4.7 million members across 9 Southern California counties. KPSC members can obtain their insurance through employer-sponsored plans, private plans, Medicare, Medicaid (Medi-Cal), and other low-income programs. Its membership has been shown to be approximately representative of the population in its service region.10 We limited our analysis to patients who received care at KPSC hospitals and whose data are accessible through KPSC's EHR. Among KPSC members, approximately 80% of all hospitalizations occur at KPSC hospitals (internal data).

Cohort Assembly

For model development, we identified patients who were at least 18 years of age and hospitalized between May 3, 2020, and October 17, 2020. Data from the early weeks of the pandemic were excluded because standards of care evolved quickly during that time. COVID-19 hospitalizations were defined as those occurring within 30 days after or up to 7 days before a positive polymerase chain reaction test for SARS-CoV-2. We included hospitalizations occurring up to 7 days before a positive test to capture stays from earlier in the pandemic, when testing resources were limited and hospitalized patients may have been tested during or after their hospitalization. A separate cohort of 10,990 patients hospitalized between August 1, 2021, and February 28, 2022, was used to check model performance under more recent conditions.
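The linkage window described above can be sketched as a simple date check. This is an illustrative helper under the stated cohort rule, not code from the study:

```python
from datetime import date, timedelta

def is_covid_hospitalization(admit_date: date, positive_test_date: date) -> bool:
    """Return True if the admission falls within the study's linkage window:
    from 7 days before to 30 days after a positive SARS-CoV-2 PCR test."""
    # Admission within 30 days after the test, or up to 7 days before it.
    return (positive_test_date - timedelta(days=7)
            <= admit_date
            <= positive_test_date + timedelta(days=30))
```

For example, an admission 6 days before a positive test qualifies, while one 12 days before does not.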

Outcomes

Deterioration events were defined as requiring high-flow oxygen, BiPAP, mechanical ventilation, or ICU admission or level of support. Intensive respiratory events were identified via nursing flow sheets, and intensive level of support was defined by a high nurse to patient ratio (1:1 or 1:2). Intensive level of support was preferred as an indicator of deterioration to ICU admission because it does not depend on the physical transfer to an ICU when space and other resources may be limited. Deterioration events occurring within 4 hours of arrival at the emergency department were excluded because most clinical predictors, such as laboratory values, were not yet available for these patients. Based on clinicians’ recommendations, we chose a prediction window of 72 hours to support timely decision-making.

Predictor Variables

Our initial predictor set contained 371 variables that were identified from early publications describing COVID-19 deterioration and from clinician input.11-14 Predictors included demographics, vital signs, diagnoses, health care utilization in the prior 3 and 6 months, comorbidity and frailty indices, oxygen saturation as measured by pulse oximetry (SpO2), laboratory values, changes in laboratory values in the last 24 hours, and information on do-not-resuscitate and do-not-intubate (DNR/DNI) orders in the prior 12 months, which served as an indicator of prior severe illness (eAppendix Table [eAppendix available at ajmc.com]). Variables with near-zero variance or high missingness (> 30%) were removed. Among variable pairs with high correlation (> 0.8), we consulted with physicians to choose the variable with the highest clinical relevance. On the day of admission, we used the first available value of each predictor within 6 hours of arrival. On any subsequent day, we chose each variable value as of 10 AM, when most morning laboratory tests had resulted. The Epic-DI is updated every 20 minutes. Our model used the highest (worst) Epic-DI score since admission or in the past 24 hours.
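The screening rules above (30% missingness, 0.8 correlation) can be sketched in pandas. This is an illustrative reconstruction, not the study's actual code; highly correlated pairs are only flagged for review rather than dropped automatically, mirroring the physician-consultation step:

```python
import numpy as np
import pandas as pd

def screen_predictors(df: pd.DataFrame, max_missing: float = 0.30, max_corr: float = 0.80):
    """Drop high-missingness and near-constant columns; flag correlated pairs."""
    kept = df.copy()
    # Drop columns with excessive missingness (> 30% per the text).
    kept = kept.loc[:, kept.isna().mean() <= max_missing]
    # Drop near-constant (zero-variance) columns.
    kept = kept.loc[:, kept.nunique(dropna=True) > 1]
    # Flag highly correlated pairs (> 0.8); the study resolved these with physician input.
    corr = kept.corr(numeric_only=True).abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    flagged = [(a, b) for a in upper.index for b in upper.columns
               if pd.notna(upper.loc[a, b]) and upper.loc[a, b] > max_corr]
    return kept, flagged
```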

Analysis

We used split-sample cross-validation to train and validate the model. For model development, data from May 3, 2020, to July 4, 2020 (n = 2309 hospitalizations), were split into training (64%) and validation (36%) sets. For testing model performance on new data, data accumulated from July 5, 2020, to October 17, 2020 (n = 4545 hospitalizations), were used, and these results are reported here. To assess robustness of model performance over time, the model was also tested in a more recent cohort of patients hospitalized between August 1, 2021, and February 28, 2022.

Data on candidate predictors were more limited on the admission day than on subsequent days, and patient health status was likely to change more rapidly early in the hospitalization because immediate interventions such as supplemental oxygen were likely to affect the condition of newly hospitalized patients. We therefore began by building 2 models: the first for predicting an event within 72 hours of the admission day (admission day model) and the second for predicting an event within 72 hours of the morning of each subsequent day (day2+ model).

We first utilized a nonparametric machine learning approach: random forest (RF). RF builds a collection of decision trees. Each tree splits cases and noncases into increasingly homogeneous groups by selecting, at each split, the variable from a randomly chosen subset of predictors that best separates cases and noncases. The final prediction is the outcome most often predicted across all trees. RF flexibly accounts for potential unknown nonlinear associations, interactions, and missing values. RF capitalizes on large numbers of predictors and served as a performance benchmark for a more parsimonious regression model, which imposes stronger assumptions on the relationship between predictors and outcomes.
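As a rough illustration of this benchmarking step, a random forest can be fit and scored with AUROC on held-out data. This scikit-learn sketch uses synthetic data; the predictors, sample sizes, and settings are stand-ins, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for a candidate-predictor matrix (not the study data):
# only the first two of 20 columns carry signal.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)

# Mimic a split-sample design: train on one portion, evaluate on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.36, random_state=0)

# Each tree splits on the best variable from a random predictor subset;
# the forest aggregates the trees' votes into a predicted probability.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
```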
We used penalized regression with least absolute shrinkage and selection operator (LASSO) to reduce the number of predictors that the RF model had identified.15 In LASSO regression, the absolute value of each regression coefficient is multiplied by a penalization parameter. As the penalization parameter increases, more regression coefficients are forced toward 0, removing the corresponding predictors from the model. The study team selected the LASSO model with the fewest predictors whose performance was comparable to that of more complex models.15 The final models for admission day and day2+ further reduced the number of predictors and were chosen in collaboration with clinicians who, with implementation in view, helped weigh trade-offs between model performance and parsimony. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), a measure of how well a model discriminates between binary outcomes. Values range from 0.5 for a model that performs no better than chance to 1 for a model with perfect discrimination.16
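The LASSO behavior described above, with stronger penalties yielding sparser predictor sets, can be illustrated with L1-penalized logistic regression on toy data. In this scikit-learn sketch, `C` is the inverse penalty strength; nothing here reflects the study's actual predictors or tuning:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Toy data: only the first 3 of 15 predictors carry signal (not the study data).
X = rng.normal(size=(1500, 15))
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(size=1500) > 0).astype(int)

results = {}
for C in (1.0, 0.1, 0.01):
    # Smaller C = stronger L1 penalty = more coefficients forced to exactly 0,
    # so the retained predictor set shrinks.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    n_nonzero = int((model.coef_ != 0).sum())
    auroc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    results[C] = (n_nonzero, auroc)
```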

For implementation, a final composite model was constructed by combining variables from the admission day and day2+ models and interacting each coefficient with a 0 or 1 indicator depending on whether the prediction was within the first 24 hours of a patient’s admission or not.
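The composite construction can be sketched as a single logistic model whose coefficients switch on a first-24-hours indicator. The coefficient values and variable names below are hypothetical placeholders for illustration only; the published values appear in Table 3:

```python
import math

def covid_hdi_sketch(features: dict, within_first_24h: bool) -> float:
    """Illustrative composite model: each coefficient set is selected by a 0/1
    indicator for whether the prediction falls in the first 24 hours of admission.
    All coefficients below are hypothetical, NOT the published Table 3 values."""
    admission_day = {"intercept": -4.0, "epic_di_max": 0.05, "spo2": -0.08, "o2_flow": 0.20}
    day2_plus = {"intercept": -4.5, "epic_di_max": 0.04, "spo2": -0.07, "o2_flow": 0.15,
                 "o2_flow_change": 0.30, "platelets": -0.002, "dnr_dni": 0.8}
    # Selecting a coefficient set is equivalent to interacting every term
    # with the first-24-hours indicator in one combined model.
    coefs = admission_day if within_first_24h else day2_plus
    z = coefs["intercept"] + sum(coefs[k] * features.get(k, 0.0)
                                 for k in coefs if k != "intercept")
    return 1 / (1 + math.exp(-z))  # predicted probability of deterioration
```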

RESULTS

Table 1 shows that patient characteristics in the training and validation data set (n = 2309) compared with the test data set (n = 4545) were similar overall, except for slight differences in the event rate (17% vs 15%, respectively), mean age (56 vs 58 years), and female gender (46% vs 50%). Women were less likely than men to experience an outcome in all data sets (33% vs 41%). The highest proportion of hospitalized patients with COVID-19 were of Hispanic ethnicity (61% in the testing data set). Among patients with an event, 79% experienced the event within 72 hours of admission. Patients who experienced a deterioration event had a higher Epic-DI, lower SpO2, and higher levels of oxygen delivery than patients who did not experience an event.

The RF model for the admission day model (55 variables) and the day2+ model (73 variables) yielded AUROCs of 0.84 and 0.85, respectively (Table 2). Using LASSO regression allowed us to narrow down the predictor set of the RF model from 55 to 20 variables for the admission day model and from 73 to 23 variables for the day2+ model (see eAppendix Table for the complete list of variables). Model performance remained comparable (AUROC, 0.81 and 0.85 for the admission day and day2+ models, respectively) (Table 2).

Following physician review, the final admission day model was further reduced to 3 predictors (maximum Epic-DI score in the last 24 hours, SpO2, and oxygen flow rate). The day2+ model included all variables of the admission day model and 3 additional predictors (rate change in oxygen flow rates, platelet count, and whether the patient had a DNR/DNI order on file anytime during the 12 months preceding the index hospitalization) (Table 3). The COVID-HDI is the final composite model that combined the coefficients of the admission day and day2+ models with an indicator term that denotes whether a patient is in their first 24 hours from time of admission. The respective β coefficients for the composite model are listed in Table 3.

The COVID-HDI had an AUROC of 0.83. A comparison of the AUROCs for the COVID-HDI and the Epic-DI (AUROC, 0.70) when predicting a deterioration event within 72 hours can be found in the eAppendix Figure. The COVID-HDI outperformed the Epic-DI at all probability cutoffs. The calibration plot for the COVID-HDI model shows good calibration across patient-days that fall into the low- or high-risk groups (Figure).

In collaboration with clinician partners, we defined 4 risk strata: a low-risk group (event risk ≤ 2%) to support decisions for discharge, a borderline-low-risk group (risk > 2% and < 5%) to support discharge with home monitoring, a high-risk group (risk ≥ 50%) to support consideration of escalation of care, and a medium-risk group (risk ≥ 5% and < 50%) whose members were not likely to be candidates for discharge or escalation of care. Table 4 shows the distribution of patients who fell into each risk group at some point during their hospitalization. Percentages do not sum to 100% of the patient population because patients in the high- and medium-risk groups may transition to a lower-risk group during their hospitalization. Patients in the low-risk and borderline-low-risk groups were assumed to be discharged once they fell into 1 of the 2 low-risk groups.
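The four strata amount to a simple mapping from predicted probability to risk group, using the cutoffs stated above (2%, 5%, and 50%):

```python
def risk_stratum(p: float) -> str:
    """Map a predicted 72-hour deterioration probability to one of the four
    risk strata described in the text."""
    if p <= 0.02:
        return "low"             # candidate for discharge
    if p < 0.05:
        return "borderline low"  # candidate for discharge with home monitoring
    if p < 0.50:
        return "medium"          # continued inpatient monitoring
    return "high"                # consider escalation of care
```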

Of all patients, 12.2% were predicted to be at high risk at some point during their hospitalization. Of these patients, 50.7% (predicted probability of deterioration ≥ 48%) deteriorated within 72 hours of the first high-risk prediction. The low-risk group had an event rate of only 1.2%. Patients in the borderline-low-risk group had an observed 4.3% risk of a deterioration event (with a predicted probability of the outcome ≥ 2% and < 5%). In all, 74% of patients were categorized as low risk or borderline low risk at some point during their hospitalization.

DISCUSSION

We collaborated with clinician partners to develop the COVID-HDI, a high-performing and well-calibrated model that identifies hospitalized patients with COVID-19 at high and low risk of a deterioration event within 72 hours of run time. The COVID-HDI has been integrated into the KPSC EHR, where it can be accessed as part of standard clinical practice. The COVID-HDI can support decision-making about patients who may be considered for discharge and patients who may require increased monitoring.

The COVID-HDI builds on the Epic-DI and is a parsimonious yet high-performing regression model that is straightforward to implement and includes the Epic-DI and 5 other commonly available predictors. The model accommodates differences in predictors for outcomes occurring on admission day and any subsequent day. The Epic-DI is built into all Epic health records (although it may not be active in all EHR systems) and has been shown to have acceptable sensitivity and positive predictive value. We improved on this commonly available model through a TRIPOD standards–adherent modeling process that builds on the Epic-DI to predict COVID-19 outcomes more accurately and with a longer prediction window. When reevaluated on recent patient data, model performance proved stable.9

The COVID-HDI model is simple, parsimonious, and clinically intuitive for the most part. Predictor variables include measures of respiratory functioning, premorbid health state (DNR/DNI status), and acuity of illness (Epic-DI). The Epic-DI is a validated index of severity of illness in hospitalized patients that relies on age, vital signs, neurological status, cardiac rhythm, and selected blood count (white blood cell, platelet, and hematocrit counts) and blood chemistry (sodium, potassium, urea nitrogen, and pH) values. In health systems with Epic-based EHRs, the calculation of the Epic-DI is automated and therefore easy to implement. Although the Epic-DI also includes a term for the presence or absence of supplemental oxygen, our models show that more granular information about SpO2 provides additional prognostic information. In particular, the rate of change in SpO2 over 24 hours is an intuitive predictor of clinical improvement or worsening. Although the prognostic importance of thrombocytosis, the one other variable in the model, in patients with COVID-19 is less intuitive, one of the advantages of data-driven approaches is to allow for the consideration and predictive contribution of predictors that may be unexpected from a clinical point of view.

The COVID-HDI also improves on existing COVID-19 outcome prediction models that have methodological issues, such as small samples or no split-sample validation, or models for which methods were not transparently reported.4,17-19 A model by Razavian et al addresses these shortcomings but predicts a positive outcome over 96 hours, is limited to predicting low-risk patients for discharge, and cannot support decision-making around increased monitoring or escalation of care for patients at high risk of a deterioration event.20 Other prediction models use a single deterioration-related outcome as the prediction target, such as death or intensive care.21-23 These models are likely to miss important respiratory events that may occur when patients with COVID-19 are deteriorating. This is especially true during COVID-19 case surges, when patients may receive intensive respiratory care outside the physical ICU. Finally, some models include predictors such as imaging test results or severity of symptoms that are not available in structured fields in the EHRs of most health care systems. This limits the potential usefulness of these models.24

Our model improves on existing models in that the COVID-HDI (1) predicts a composite deterioration outcome that includes death, ICU admission, intensive-level care, and intensive respiratory support (defined as intubation, high-flow oxygen, and BiPAP); (2) uses TRIPOD standards for model development and reporting; (3) has a time horizon of 72 hours that supports timely decision-making throughout the hospitalization course; (4) provides a parsimonious model that builds on the Epic-DI with only 5 additional variables that are available in structured fields of most EHRs; and (5) identifies low-risk and high-risk patients who may be considered for discharge or escalation of care, respectively.

Limitations

Although our model has good discrimination and calibration and was tested on a new set of patients who were hospitalized during the model development process, there are important limitations. We did not test our model outside the KPSC health care system; thus, we have no information on external validity of our prediction model. COVID-19 has been evolving and so has the population it has been affecting. New variants have been shown to be more contagious and either more or less severe, and vaccination rates have increased.1 In addition, EHR tools, such as the Epic-DI, are rarely recalibrated. Future research will need to explore whether performance of the COVID-HDI changed under these new conditions.

CONCLUSIONS

We built on a validated deterioration index that is widely available via Epic EHR systems and developed a high-performing, parsimonious (and therefore easy-to-implement), portable COVID-19 deterioration risk score. This risk score may support decision-making around safe discharge and escalation of care and, with subsequent external validation, may be implemented by other health care systems that are already using the Epic-DI.

Author Affiliations: Department of Research and Evaluation, Kaiser Permanente Southern California (CN, RKB, AC, BC, ALS, MKG), Pasadena, CA; Southern California Permanente Medical Group, Los Angeles Medical Center (CWH, VKK, CS, ALS, LMM-S, JSP, AJM, RMC, SMM), Los Angeles, CA; Department of Health Systems Science, Kaiser Permanente Bernard J. Tyson School of Medicine (CN, ALS, MKG), Pasadena, CA; Southern California Permanente Medical Group, Baldwin Park Medical Center (BB), Baldwin Park, CA; Department of Biomedical Informatics and Medical Education, University of Washington, UW Medicine at South Lake Union (GL), Seattle, WA.

Source of Funding: This research was supported in part by a grant from the Regional Research Committee of Kaiser Permanente Southern California (grant No. KP-RRC-20200401). Dr Luo was partially supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award No. R01HL142503. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (CN, CWH, VKK, AC, BB, CS, ALS, LMM-S, JSP, AJM, RMC); acquisition of data (CN, VKK, AC, SMM); analysis and interpretation of data (CN, RKB, CWH, VKK, AC, BB, CS, ALS, LMM-S, JSP, AJM, RMC, SMM, GL, MKG); drafting of the manuscript (CN, VKK, BC); critical revision of the manuscript for important intellectual content (CN, RKB, CWH, VKK, BC, BB, CS, ALS, LMM-S, JSP, AJM, RMC, GL, MKG); statistical analysis (RKB, SMM, GL); obtaining funding (CN, MKG); administrative, technical, or logistic support (BC); and supervision (CN).

Address Correspondence to: Claudia Nau, PhD, Department of Health Systems Science, Kaiser Permanente Bernard J. Tyson School of Medicine, 98 S Los Robles Ave, Pasadena, CA 91101. Email: Claudia.L.Nau@kp.org.

REFERENCES

1. Richardson S, Hirsch JS, Narasimhan M, et al. Presenting characteristics, comorbidities, and outcomes among 5700 patients hospitalized with COVID-19 in the New York City area. JAMA. 2020;323(20):2052-2059. doi:10.1001/jama.2020.6775

2. Arentz M, Yim E, Klaff L, et al. Characteristics and outcomes of 21 critically ill patients with COVID-19 in Washington state. JAMA. 2020;323(16):1612-1614. doi:10.1001/jama.2020.4326

3. COVID-19 vaccinations in the United States, jurisdiction. CDC. Updated May 12, 2023. Accessed June 17, 2022. https://data.cdc.gov/Vaccinations/COVID-19-Vaccinations-in-the-United-States-Jurisdi/unsk-b7fc

4. Suh EH, Lang KJ, Zerihun LM. Modified PRIEST score for identification of very low-risk COVID patients. Am J Emerg Med. 2021;47:213-216. doi:10.1016/j.ajem.2021.04.063

5. Bradley P, Frost F, Tharmaratnam K, Wootton DG; NW Collaborative Organisation for Respiratory Research. Utility of established prognostic scores in COVID-19 hospital admissions: multicentre prospective evaluation of CURB-65, NEWS2 and qSOFA. BMJ Open Respir Res. 2020;7(1):e000729. doi:10.1136/bmjresp-2020-000729

6. Alballa N, Al-Turaiki I. Machine learning approaches in COVID-19 diagnosis, mortality, and severity risk prediction: a review. Inform Med Unlocked. 2021;24:100564. doi:10.1016/j.imu.2021.100564

7. Douville NJ, Douville CB, Mentz G, et al. Clinically applicable approach for predicting mechanical ventilation in patients with COVID-19. Br J Anaesth. 2021;126(3):578-589. doi:10.1016/j.bja.2020.11.034

8. Haimovich AD, Ravindra NG, Stoytchev S, et al. Development and validation of the quick COVID-19 severity index: a prognostic tool for early clinical decompensation. Ann Emerg Med. 2020;76(4):442-453. doi:10.1016/j.annemergmed.2020.07.022

9. Singh K, Valley TS, Tang S, et al. Evaluating a widely implemented proprietary deterioration index model among hospitalized patients with COVID-19. Ann Am Thorac Soc. 2021;18(7):1129-1137. doi:10.1513/AnnalsATS.202006-698OC

10. Koebnick C, Langer-Gould AM, Gould MK, et al. Sociodemographic characteristics of members of a large, integrated health care system: comparison with US Census Bureau data. Perm J. 2012;16(3):37-41. doi:10.7812/TPP/12-031

11. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497-506. doi:10.1016/S0140-6736(20)30183-5

12. Guan WJ, Ni ZY, Hu Y, et al; China Medical Treatment Expert Group for Covid-19. Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med. 2020;382(18):1708-1720. doi:10.1056/NEJMoa2002032

13. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395(10229):1054-1062. doi:10.1016/S0140-6736(20)30566-3

14. Xu XW, Wu XX, Jiang XG, et al. Clinical findings in a group of patients infected with the 2019 novel coronavirus (SARS-Cov-2) outside of Wuhan, China: retrospective case series. BMJ. 2020;368:m606. doi:10.1136/bmj.m606

15. Kubben P, Dumontier M, Dekker A, eds. Fundamentals of Clinical Data Science. Springer Cham; 2019.

16. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29-36. doi:10.1148/radiology.143.1.7063747

17. Reps JM, Kim C, Williams RD, et al. Implementation of the COVID-19 vulnerability index across an international network of health care data sets: collaborative external validation study. JMIR Med Inform. 2021;9(4):e21547. doi:10.2196/21547

18. Yang L, Wang Q, Cui T, Huang J, Shi N, Jin H. Reporting of coronavirus disease 2019 prognostic models: the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis statement. Ann Transl Med. 2021;9(5):421. doi:10.21037/atm-20-6933

19. Wynants L, Van Calster B, Collins GS, et al. Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal. BMJ. 2020;369:m1328. doi:10.1136/bmj.m1328

20. Razavian N, Major VJ, Sudarshan M, et al. A validated, real-time prediction model for favorable outcomes in hospitalized COVID-19 patients. NPJ Digit Med. 2020;3:130. doi:10.1038/s41746-020-00343-x

21. Incerti D, Rizzo S, Li X, et al. Prognostic model to identify and quantify risk factors for mortality among hospitalised patients with COVID-19 in the USA. BMJ Open. 2021;11(4):e047121. doi:10.1136/bmjopen-2020-047121

22. Ioannou GN, Green P, Fan VS, et al. Development of COVIDVax model to estimate the risk of SARS-CoV-2-related death among 7.6 million US veterans for use in vaccination prioritization. JAMA Netw Open. 2021;4(4):e214347. doi:10.1001/jamanetworkopen.2021.4347

23. Zhao Z, Chen A, Hou W, et al. Prediction model and risk scores of ICU admission and mortality in COVID-19. PLoS One. 2020;15(7):e0236618. doi:10.1371/journal.pone.0236618

24. Fusco R, Grassi R, Granata V, et al. Artificial intelligence and COVID-19 using chest CT scan and chest x-ray images: machine learning and deep learning approaches for diagnosis and treatment. J Pers Med. 2021;11(10):993. doi:10.3390/jpm11100993
