The American Journal of Managed Care

November 2022
Volume 28, Issue 11

Improving Risk Stratification Using AI and Social Determinants of Health

Prediction models combining claims data with social determinants of health and additional, more-timely data sources using artificial intelligence (AI) can better identify individuals with the highest future medical spending.

ABSTRACT

Objectives: To determine whether a risk prediction model using artificial intelligence (AI) to combine multiple data sources, including claims data, demographics, social determinants of health (SDOH) data, and admission, discharge, and transfer (ADT) alerts, more accurately identifies high-cost members than traditional models.

Study Design: The study used data from a Medicaid accountable care organization and included a population of 61,850 members continuously enrolled between May 2018 and April 2019.

Methods: Risk scores generated by 2 models were estimated for each member. One model, developed by Medical Home Network, used AI to analyze SDOH data, ADT activity, and claims and demographic characteristics, whereas the other model (Chronic Illness and Disability Payment System [CDPS]) relied only on demographic and claims information. To compare models, we calculated mean, median, and total spending for members with the highest 5% of AI risk scores and compared these with spending metrics for members with the highest 5% of CDPS scores. We also compared the number of members with the highest 5% of costs prospectively identified by each model as highest risk. We segmented the population by length of prior enrollment to control for varying levels of claims experience.

Results: The AI model consistently identified a higher proportion of the highest-spending members. Members deemed highest risk by the AI model also had higher spending than members deemed highest risk by the CDPS model.

Conclusions: Identification of high-cost members can be improved by using AI to combine traditional sources of data (eg, claims and demographic information) with nontraditional sources (eg, SDOH, admission alerts).

Am J Manag Care. 2022;28(11):582-587. https://doi.org/10.37765/ajmc.2022.89261

_____

Takeaway Points

Prediction models that supplement claims information with data on social determinants of health, care management information, and admission, discharge, and transfer alerts and analyze these data using artificial intelligence (AI) can identify members with high future spending more accurately than regression-based models that rely solely on claims and demographic data.

  • AI allows risk stratification programs to integrate multiple data sources into a single, actionable model.
  • More timely, dynamic, and accurate risk stratification enables allocation of care management resources to patients on whom care management can have the greatest impact.
  • Care management programs should collect and use information other than claims data for risk stratification.

_____

Attempting to lower medical costs by identifying and intervening in the care of patients with the greatest need, sometimes referred to as “hotspotting,” is a theoretically attractive approach to reducing health care spending and improving population health.1 Unfortunately, a recent evaluation of one of the most prominent hotspotting programs found that the program failed to reduce readmissions and costs.2 Evaluation results suggest that the positive effects attributed to similar care management programs may result from research designs that do not adequately control for regression to the mean, rather than from the ability of these programs to reduce costs. This highlights a challenge that many care management programs face: Although it is easy to identify patients with high past utilization, these patients will not necessarily be the highest utilizers in the future.3-5

Prior utilization information is likely to be an important element in identifying high-cost patients, but ideally it would be combined with other data to identify the rising-risk population. Collecting potentially useful data such as admission, discharge, and transfer (ADT) feeds from hospitals and integrating them into predictive models is problematic in practice. As a result, health plans and state Medicaid agencies have relied primarily on risk prediction models based on claims data.6,7 However, these claims-based actuarial models lack information on the social determinants of health (SDOH) associated with medical spending.8,9 This reflects the difficulty of collecting member-specific SDOH data and integrating them into processes for risk stratification and for directing care management resources, a task that has proved challenging for insurers and providers alike. Although many health insurers recognize the need to address SDOH and have been active in developing programs to do so,10 developing the infrastructure needed to collect individual-level SDOH data has been more challenging. Even accountable care organizations (ACOs) actively engaged in programs to address SDOH struggle to assess members’ SDOH needs and to incorporate this information into targeted interventions.11 Providers have reported similar challenges in collecting and using SDOH data.12

In addition to lacking data on SDOH factors affecting utilization and cost, claims-based models are subject to lags between service provision and claims receipt. This results in a lack of real-time information about inpatient admissions, discharges, and transfers, as well as emergency department (ED) visits. The ability to rapidly respond to these acute medical events is an integral component of successful care management programs,13 and this response is hindered when ADT information is not included in risk stratification models.

Care management programs are limited not only by the data to which they have access but also by the models used to leverage these data. Many models fail to take advantage of recent developments in the field of artificial intelligence (AI), relying instead on basic regression techniques or relatively simple algorithms based on prior utilization of specific services.

In this article, we examine the performance of a risk prediction score developed by Medical Home Network (MHN) that is based on both traditional data elements available to managed care organizations (including demographic information and medical and pharmacy claims) and nontraditional data elements (including SDOH, ADT, and care management information). This risk score is constructed using AI. We compare its performance with that of the Chronic Illness and Disability Payment System (CDPS) risk score, which predicts future spending using only traditional data elements.

DATA AND METHODS

Study data were provided by an ACO delegated for care management of a Medicaid population in Cook County, Illinois. This population includes individuals eligible for Medicaid through the Affordable Care Act’s Medicaid expansion provision, Medicaid-eligible mothers and children (referred to as the “family health plan”), and individuals eligible for Medicaid because of disability (the “integrated care program”).

For this study, we analyzed spending in the 12 months between May 2018 and April 2019. We excluded individuals with any pregnancy-related spending during the study period because the risk prediction models that we examined are not calibrated to predict pregnancy-related spending and because pregnant members are referred to care management by means other than the AI model and are served by a different care management program. This resulted in exclusion of 7480 members from the sample. In addition, we excluded 11,446 individuals engaged in case management at some point during their enrollment in the ACO. We chose to exclude these members because case management efforts could affect the spending that our risk scoring models attempted to predict. Finally, we excluded 76,911 members not continuously enrolled during the study period and 221 members with missing risk score information. Our final sample consisted of 61,850 individuals continuously enrolled over 12 months. Table 1 provides details on how our sample was derived.

Risk Models

We compared 2 models used to assess risk for members. The first, CDPS, is designed to predict risk for Medicaid populations14 and is used by many state Medicaid agencies to set rates for Medicaid managed care plans. CDPS is a regression-based model that relies on medical and pharmacy claims history to predict spending based on diagnoses and demographic factors. The CDPS model includes clusters of diagnoses defined by a combination of empirical analysis and clinical judgment.

The second risk model, referred to here as the AI model, uses the same demographic variables (including member age) and medical and pharmacy claims data as the CDPS model but adds several data sources, including SDOH data, ADT data, and data provided by care managers. SDOH information is collected using a proprietary member survey known as the Health Risk Assessment (HRA). The HRA collects information on the most common chronic illnesses with potential for care management impact, recent inpatient or ED utilization, and SDOH-related barriers to treatment adherence. The information collected by the HRA is listed in the eAppendix (available at ajmc.com). Surveys are administered to most members (approximately 85%) within 60 days of plan enrollment and are repeated based on risk level or on triggered events, such as a member request or a sudden increase in utilization. The HRA data are fed into the AI model, along with medical claims, pharmacy claims, care management, and other administrative data, to develop an individual AI risk score for each patient. The AI model uses a machine learning regression model to predict the total cost of members’ medical claims. The model is trained on historical member data, with an 80/20 split defining the training and testing samples. Once an ideal set of hyperparameters is identified, the entire training set is used to fit the final version of the model, which is then evaluated on the withheld testing sample.
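To make this workflow concrete, the short Python sketch below illustrates an 80/20 train/test split with hyperparameter tuning followed by a final refit. It is not MHN’s implementation: the article does not name the learning algorithm, so a scikit-learn gradient boosting regressor stands in for it, and the file name and columns (member_features.csv, member_id, total_cost) are hypothetical.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical member-level file combining claims, demographic, SDOH, ADT,
# and care management features with observed total cost.
members = pd.read_csv("member_features.csv")
feature_cols = [c for c in members.columns if c not in ("member_id", "total_cost")]
X, y = members[feature_cols], members["total_cost"]

# 80/20 split to define the training and testing samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Search for an ideal set of hyperparameters using only the training sample.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3, 4]},
    scoring="neg_mean_absolute_error",
    cv=5,
)
search.fit(X_train, y_train)

# Fit the final model on the entire training set with the chosen
# hyperparameters, then evaluate it on the withheld testing sample.
final_model = GradientBoostingRegressor(random_state=0, **search.best_params_)
final_model.fit(X_train, y_train)
print("Held-out R^2:", final_model.score(X_test, y_test))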

We calculated predictions from each risk model using data available as of May 2018, the beginning of the study period, before cost measurement began. It is possible that the models’ relative performances may differ for individuals with different lengths of claims experience. Because the CDPS model is primarily claims based, it may be less able than the AI model to identify costly beneficiaries without an extensive claims history, as the AI model includes both claims and nonclaims data. However, the inclusion of nonclaims data may be less important as a member’s claims history grows. To account for this possibility, we stratified our sample into 3 different categories based on prior claims experience as of the beginning of the study period. We estimated model performance separately for members with 0 to 3 months of prior enrollment, members with 4 to 12 months of prior enrollment, and members with more than 12 months of prior enrollment.
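As an illustration of this stratification step, the brief Python sketch below bins members into the 3 prior-enrollment groups; the dataframe and the months_enrolled_prior column are hypothetical stand-ins for the study data.

import pandas as pd

members = pd.read_csv("member_features.csv")  # hypothetical member-level file

# Bin prior enrollment (in months, measured as of May 2018) into the 3 study groups.
members["enrollment_group"] = pd.cut(
    members["months_enrolled_prior"],
    bins=[-1, 3, 12, float("inf")],
    labels=["0-3 months", "4-12 months", ">12 months"],
)

# Model performance is then assessed separately within each group.
print(members["enrollment_group"].value_counts())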

Analysis and Outcome Measures

Our goal was to assess how successful each model was at identifying members in the top 5% of spending (we varied this 5% threshold in sensitivity analyses because the definition of high-spending members targeted for intervention may vary with the care management resources an organization has available). Spending was defined as total medical and pharmacy spending. We chose to examine the models’ ability to identify members in the top 5% of spending, rather than their ability to estimate the exact dollar amount of a member’s spending, because identifying high-spending members is a more important step in allocating care management resources than estimating exact spending amounts. Using methods similar to those of prior studies,15 we compared the risk prediction models by calculating mean, median, and total spending for members with the highest 5% of risk scores at the beginning of the study period. We then compared these amounts against actual spending for members with the highest 5% of costs from May 2018 to April 2019. A risk prediction model that successfully identifies high-cost members will capture more total and median spending. As an additional measure of each model’s performance, we calculated the percentage of members in the highest 5% of spending who were also in the highest 5% of risk scores.
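The sketch below outlines this comparison for a hypothetical member-level dataframe containing actual 12-month spending and each model’s baseline risk score; the file and column names (member_scores.csv, spending, ai_score, cdps_score) are illustrative, not the study’s actual data layout.

import pandas as pd

df = pd.read_csv("member_scores.csv")  # hypothetical: spending, ai_score, cdps_score

def top_5pct(values: pd.Series) -> pd.Series:
    """Flag members at or above the 95th percentile of a score or of spending."""
    return values >= values.quantile(0.95)

high_cost = top_5pct(df["spending"])  # members with the highest 5% of actual costs

for score_col in ("ai_score", "cdps_score"):
    flagged = top_5pct(df[score_col])  # members with the highest 5% of risk scores
    spend = df.loc[flagged, "spending"]
    # Share of the truly highest-cost members that the model flagged as highest risk.
    capture = (flagged & high_cost).sum() / high_cost.sum()
    print(f"{score_col}: total=${spend.sum():,.0f}, mean=${spend.mean():,.0f}, "
          f"median=${spend.median():,.0f}, capture={capture:.0%}")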

RESULTS

Sample Characteristics

Table 2 shows the characteristics of our population. The mean CDPS risk score was 0.952, with a range of 0.072 to 41.77. The mean AI risk score was 42.07, with a range of 0 to 100. (The AI risk score is based on predicted costs for all members transformed to a 0-100 ranked scale.) Our sample includes 29,119 members with more than 12 months of prior enrollment at the start of the study period and 30,970 members with between 4 and 12 months of prior enrollment. Only 1761 members had fewer than 4 months of prior enrollment. Consistent with the Medicaid population in general, our sample is relatively young, with a mean age of 21.9 years, and is 56% female. Most members (79.1%) are Medicaid-eligible parents and children, 15.7% are from the Affordable Care Act expansion population, and 5.2% are Medicaid eligible because of a disability. Mean annual spending was $2070 per member, although spending variance was high, with an SD of $9904.
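The 0-100 ranked scale can be illustrated with the brief sketch below. The article does not specify the exact transformation, so this percentile-rank mapping (and the example predictions) should be read as an assumption.

import pandas as pd

predicted_cost = pd.Series([120.0, 5400.0, 860.0, 23000.0])  # hypothetical predicted costs

# Map predicted costs to a 0-100 ranked scale (assumed transformation).
ranks = predicted_cost.rank(method="first")
ai_risk_score = (ranks - 1) / (len(ranks) - 1) * 100
print(ai_risk_score.round(1).tolist())  # [0.0, 66.7, 33.3, 100.0]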

Spending Comparisons

Table 3 compares spending measures for the 5% of members with the highest costs with those for the groups of members with (1) the highest 5% of AI risk scores and (2) the highest 5% of CDPS risk scores. Results are stratified by the number of months that members were enrolled before the predictive risk scores were computed. For members with the longest claims history (those with more than 12 months of prior enrollment), the top 5% of members by spending (n = 1456) incurred a total of $38,543,492 in costs (mean, $26,472; median, $16,078). Members with the highest 5% of AI risk scores incurred a total cost of $20,892,684 during the study period (mean, $14,349 per member; median, $7265). Forty-one percent of members with the highest AI risk scores were also in the group of members with the highest spending. By comparison, mean and median costs for the group of members with the highest 5% of CDPS risk scores were lower ($11,808 and $3753, respectively), and only 29% of members with the highest CDPS scores were in the highest-spending group. The AI model’s high-risk group therefore had higher total, mean, and median spending than the CDPS model’s high-risk group, and the AI model identified a larger number of the highest-cost members than the CDPS model did.

Furthermore, the AI model identified 41% more of the highest-spending members than the CDPS model did (175 additional members), and the members it identified accounted for $3.7 million more in total spending ($20,892,684 compared with $17,192,192). As a result, assigning care managers based on AI risk scores allows managed care organizations to focus care management resources on a larger share of high-cost members than risk stratification based on CDPS scores would.

We initially expected that the AI model might outperform the CDPS model primarily among members with the fewest months of prior enrollment (and hence the least claims history). As shown in Table 3, however, the AI model identified more members in the highest 5% of actual spending and identified members with higher mean, median, and total spending than did the CDPS model across all prior-enrollment groups.

Sensitivity Analyses

Sensitivity analyses tested whether results were robust to our definition of high-cost members (ie, members with the highest 5% of spending), our exclusion of case-managed enrollees, and our focus on total spending rather than medical spending alone. The primary results defined the high-cost group as members with the highest 5% of spending; in sensitivity analyses, we set this threshold at the top 1% and then the top 3% of members by cost. We also repeated the analyses including the 11,446 members who participated in case management and, separately, using medical spending alone as the outcome (as opposed to combined medical and pharmacy spending). In all cases, the results were similar to those presented above. These results are available from the authors upon request.

DISCUSSION

Many payers and delivery systems engage in care management efforts to reduce medical spending by identifying patients likely to incur high costs, then intervening to reduce preventable spending. Unfortunately, despite many available predictive models, identification of members with the highest future spending remains challenging. Our results suggest that this is due in part to the heavy reliance of these models on demographic and claims data and their inability to incorporate other sources of data.

Identifying preventable spending may require identifying patients with rapidly rising risk scores, not just patients whose risk scores are already high. In fact, the ACO studied is already targeting members with rapidly rising risk scores for care management. Understanding how to best incorporate risk score changes into risk stratification efforts is an opportunity for future research. However, to the extent that this type of risk trajectory analysis can improve risk stratification, plans without the infrastructure to combine data sources in real time will miss care management opportunities.

Some care management organizations struggle to construct data infrastructure and to create the processes that are necessary to collect nonclaims data, analyze data from multiple sources, and use these to deploy care management resources. This is not surprising, as all aspects of this process are challenging. Collecting SDOH information is not a straightforward process, and the industry has not adopted standard instruments for collecting these data. Unlike SDOH information, ADT information is available in a standardized format, but communities often lack infrastructure to communicate this information among hospitals, insurers, and care management programs. Although stakeholders in some areas of the country have collaborated to establish health information exchanges to facilitate transmission of patient data among unaffiliated providers, these exchanges are still the exception rather than the rule. As a result, many care management programs must rely on claims data that are often at least 3 months old due to the lag between service delivery and claims processing. Finally, even organizations that have managed to collect and store data from multiple disparate sources might lack the expertise to analyze the data in a way that informs care management efforts. Alternatively, some plans might be able to combine these data but still rely on older analytical techniques that lack the predictive improvements that more recent AI techniques offer. Even plans that successfully incorporate multiple data sets into AI models may face challenges making these data usable to care managers. The organization providing data for this study identifies factors contributing to a member’s high risk score so that case managers can better understand the characteristics that make a member high risk.

Ultimately, a care management program’s ability to affect utilization and cost will depend on the interventions the program makes. However, even the best interventions are unlikely to be successful if they are targeted toward low-cost patients with little potential benefit. These results underscore the importance of care management programs’ investment in improved data infrastructure and analysis.

Limitations

Our analyses are limited in several ways. First, our data come from a single ACO operating in a single geographic area and serving a Medicaid population more likely to face SDOH-related barriers than members of a commercially insured population. For instance, of the SDOH factors incorporated in the AI model, some of the most highly correlated with cost were needs related to food, clothing, or housing and a self-reported health rating of fair or poor.8 These results may not generalize to other ACOs or health plans whose member populations differ significantly. Even so, many care management programs serve populations similar to the one studied here, to whom these results are likely to generalize. Second, our data are drawn from a single 12-month period, May 2018 through April 2019. We cannot identify any unique events occurring during this period that would make the AI model’s predictions more accurate than they would be during other time periods; however, it remains possible that results from other time periods could vary. Third, although the AI model was relatively successful at identifying high-cost individuals, it was still unable to identify roughly half of the high-cost members. Part of this challenge is driven by the random nature of some health spending. However, MHN is currently working to add other data to the model, particularly clinical laboratory data, to improve prediction. Finally, our analyses focus on practices that can improve the ability to identify members who will incur high costs. Identification is the first step in creating programs to control medical spending, but the ability to identify high-cost members does not ensure cost reductions. Ultimately, the effect on spending depends on the effectiveness of the care management program, and successful cost reduction is by no means assured. Several prominent care management programs have failed to demonstrate changes in utilization or cost outcomes,2,16 although others report greater success.13,17,18 To address this issue, the care manager for the ACO being studied is developing disease-specific interventions targeting conditions that are both highly prevalent and associated with high costs in the AI model.

CONCLUSIONS

Many care management programs have limited sources of data on their members, and many use regression-based methods to identify members at risk of high spending. Our results suggest that a model developed using AI and analyzing data that include claims, demographics, SDOH, and ADT information can more successfully identify high-cost members than a model based on claims and demographic data alone. We suggest that care management programs can better target their interventions by investing in the infrastructure necessary to collect, store, and update new data sources and in the expertise to combine these data using advanced analytic methods.

Author Affiliations: College of Health Professions, Virginia Commonwealth University (NWC), Richmond, VA; Medical Home Network (AJ, TB, CL, KS, TP), Chicago, IL.

Source of Funding: None.

Author Disclosures: Dr Jones is a board member and employee of the nonprofit Medical Home Network. Mr Burkard is an employee of Medical Home Network. Ms Lulias is a board member of MoreCare and Medical Home Network, has consulted for MoreCare, is an employee of Medical Home Network, and owns stock in MoreCare. Ms Posa is an employee of Medical Home Network. Dr Carroll and Ms Severson report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (NWC, AJ, TB, CL, TP); acquisition of data (NWC, TB, CL, KS); analysis and interpretation of data (NWC, AJ, TB, KS, TP); drafting of the manuscript (NWC, TB, CL, TP); critical revision of the manuscript for important intellectual content (NWC, TP); statistical analysis (NWC, TB, KS); obtaining funding (CL); administrative, technical, or logistic support (TB, CL, TP); and supervision (AJ, TB, CL, TP).

Address Correspondence to: Nathan W. Carroll, PhD, College of Health Professions, Virginia Commonwealth University, 900 E Leigh St, Richmond, VA 23298. Email: carrolln@vcu.edu.

REFERENCES

1. Gawande A. The hot spotters. The New Yorker. January 16, 2011. Accessed January 12, 2021. https://www.newyorker.com/magazine/2011/01/24/the-hot-spotters

2. Finkelstein A, Zhou A, Taubman S, Doyle J. Health care hotspotting—a randomized, controlled trial. N Engl J Med. 2020;382(2):152-162. doi:10.1056/NEJMsa1906848

3. Figueroa JF, Lyon Z, Zhou X, Grabowski DC, Jha AK. Persistence and drivers of high-cost status among dual-eligible Medicare and Medicaid beneficiaries: an observational study. Ann Intern Med. 2018;169(8):528-534. doi:10.7326/M18-0085

4. Figueroa JF, Zhou X, Jha AK. Characteristics and spending patterns of persistently high-cost Medicare patients. Health Aff (Millwood). 2019;38(1):107-114. doi:10.1377/hlthaff.2018.05160

5. Johnson TL, Rinehart DJ, Durfee J, et al. For many patients who use large amounts of health care services, the need is intense yet temporary. Health Aff (Millwood). 2015;34(8):1312-1319. doi:10.1377/hlthaff.2014.1186

6. Hileman G, Steele S. Accuracy of Claims-Based Risk Scoring Models. Society of Actuaries; 2016. Accessed November 12, 2019. https://www.soa.org/globalassets/assets/Files/Research/research-2016-accuracy-claims-based-risk-scoring-models.pdf

7. Courtot B, Coughlin TA, Lawton EA. Medicaid and CHIP managed care payment methods and spending in 20 states. Office of the Assistant Secretary for Planning and Evaluation. December 2012. Accessed November 12, 2019. https://aspe.hhs.gov/sites/default/files/migrated_legacy_files//43966/rpt.pdf

8. Jones A, Lemak CH, Lulias C, Burkard T, McDowell B, Severson K. Predictive value of screening for addressable social risk factors. HSOA J Community Med Public Health Care. 2017;4(30). doi:10.24966/CMPH-1978/100030

9. Chen S, Bergman D, Miller K, Kavanagh A, Frownfelter J, Showalter J. Using applied machine learning to predict healthcare utilization based on socioeconomic determinants of care. Am J Manag Care. 2020;26(1):26-31. doi:10.37765/ajmc.2020.42142

10. Berry K. How health insurance providers are tackling social barriers to health. Am J Accountable Care. 2019;7(4):19-21.

11. Murray GF, Rodriguez HP, Lewis VA. Upstream with a small paddle: how ACOs are working against the current to meet patients’ social needs. Health Aff (Millwood). 2020;39(2):199-206. doi:10.1377/hlthaff.2019.01266

12. Fraze TK, Brewster AL, Lewis VA, Beidler LB, Murray GF, Colla CH. Prevalence of screening for food insecurity, housing instability, utility needs, transportation needs, and interpersonal violence by US physician practices and hospitals. JAMA Netw Open. 2019;2(9):e1911514. doi:10.1001/jamanetworkopen.2019.11514

13. Hong CS, Siegel AL, Ferris TG. Caring for high-need, high-cost patients: what makes for a successful care management program? The Commonwealth Fund. August 7, 2014. Accessed November 12, 2019. https://www.commonwealthfund.org/publications/issue-briefs/2014/aug/caring-high-need-high-cost-patients-what-makes-successful-care

14. Kronick R, Gilmer T, Dreyfus T, Lee L. Improving health-based payment for Medicaid beneficiaries: CDPS. Health Care Financ Rev. 2000;21(3):29-64.

15. Robst J. Comparing methods for identifying future high-cost mental health cases in Medicaid. Value Health. 2012;15(1):198-203. doi:10.1016/j.jval.2011.08.007

16. Peikes D, Chen A, Schore J, Brown R. Effects of care coordination on hospitalization, quality of care, and health care expenditures among Medicare beneficiaries: 15 randomized trials. JAMA. 2009;301(6):603-618. doi:10.1001/jama.2009.126

17. Kumar GS, Klein R. Effectiveness of case management strategies in reducing emergency department visits in frequent user patient populations: a systematic review. J Emerg Med. 2013;44(3):717-729. doi:10.1016/j.jemermed.2012.08.035

18. Holahan J, Schoen C, McMorrow S. The potential savings from enhanced chronic care management policies. Urban Institute. November 30, 2011. Accessed November 12, 2019. https://www.urban.org/sites/default/files/alfresco/publication-pdfs/412453-The-Potential-Savings-from-Enhanced-Chronic-Care-Management-Policies.pdf
