ABSTRACT
Objective: To describe a multistage process of designing and evaluating a dashboard that presents data on how equitably health plans provide care for their members.
Study Design: We designed a dashboard for presenting summative and finer-grained data to health plans for characterizing how well plans are serving individuals who belong to racial/ethnic minority groups and individuals with low income. The data presented in the dashboard were based on CMS’ Health Equity Summary Score (HESS) for Medicare Advantage plans.
Methods: Interviews and listening sessions were conducted with health plan representatives and other stakeholders to assess understanding, perceived usefulness, and interpretability of HESS data. Usability testing was conducted with individuals familiar with quality measurement and reporting to evaluate dashboard design efficiency.
Results: Listening session participants understood the purpose of the HESS and expressed a desire for this type of information. Usability testing revealed a need to improve dashboard navigability and to streamline content.
Conclusions: The HESS dashboard is a potentially useful tool for presenting data on health equity to health plans. The multistage process of continual testing and improvement used to develop the dashboard could be a model for targeting and deciding upon quality improvement efforts in the domain of health equity.
Am J Manag Care. 2023;29(3):e91-e95. https://doi.org/10.37765/ajmc.2023.89335
Takeaway Points
Health plans use data to decide on quality improvement initiatives. Having a dashboard that characterizes how equitably plans are serving their enrollees would promote health equity.
The Office of the Assistant Secretary for Planning and Evaluation recently recommended that, in quality reporting, CMS include measures to motivate a focus on health equity and to help prioritize quality improvement (QI) efforts.1,2 Quality dashboards are a potential means of doing so. Studies have shown that dashboards can improve quality of care and patient safety.3-5 However, evidence is lacking on how to present actionable information on health equity.
The recently developed Health Equity Summary Score (HESS) is a novel approach to health equity measurement that was designed to give Medicare Advantage (MA) contracting organizations both summative and detailed information about their performance for underserved groups in the areas of clinical care and patient experience.6-8 The HESS focuses on 2 groups that the National Academies of Sciences, Engineering, and Medicine identified as having experienced greater social risk-related obstacles to health and high-quality health care, and for whom good data are available in current Medicare data systems: (1) individuals who belong to racial/ethnic minority groups and (2) individuals who are dually eligible for Medicare and Medicaid or who receive a Medicare low-income subsidy (LIS), each a marker of low-income status.9 The HESS incorporates information on cross-sectional performance to recognize excellent care provided to Medicare beneficiaries who are dually eligible or LIS recipients, Asian or Pacific Islander (API), Black, or Hispanic, and it accounts for within-contract and overall improvement.
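To make the general shape of this construction concrete, the following minimal sketch (in R, the language in which the dashboard itself was built) standardizes group-level performance against a national benchmark and blends it with an improvement component. It is an illustration only: the equal weights and all input values are assumptions, and the actual standardization, weighting, and reliability adjustments are specified in the published HESS methodology (references 6-8).

# Hypothetical sketch of the summary-score structure; not CMS' exact algorithm.
hess_sketch <- function(group_scores, national_mean, national_sd,
                        improvement, w_cs = 0.5, w_imp = 0.5) {
  z_cs <- (group_scores - national_mean) / national_sd  # cross-sectional z-scores
  cs_component  <- mean(z_cs)          # average across underserved groups
  imp_component <- mean(improvement)   # improvement component, assumed precomputed
  w_cs * cs_component + w_imp * imp_component  # equal weights are an assumption
}

# Example: invented scores for dual/LIS, Black, and Hispanic enrollees
hess_sketch(group_scores = c(72, 68, 75), national_mean = 70,
            national_sd = 5, improvement = c(0.2, 0.1, 0.3))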
This paper describes the development and initial evaluation of a dashboard for presenting HESS data to MA contracting organizations. In designing the dashboard, we used a variety of strategies shown to make data more easily comprehensible and relevant.10-18
Dashboard Development Process
We first created a paper report (eAppendix A [eAppendices available at ajmc.com]) to convey individual MA contracting organizations’ performance on the HESS. This paper report was designed as a precursor to the online dashboard and contained the same content. We conducted interviews with QI staff from a small set of MA organizations and listening sessions with QI staff from a broad set of MA organizations and other stakeholders to gather feedback on the interpretability and potential usability of information in the report and to determine how organizations might use the data for QI.
Feedback from the interviews and listening sessions informed the design of the interactive online dashboard, which we built using Shiny,19 a software package for the R programming environment (R Foundation for Statistical Computing). The online environment allowed us to include features such as navigation panels, drill-down capabilities, and pop-up boxes that provide definitions and elaborations on demand while keeping pages free from clutter; it also facilitates regular updates as new quality data become available and speeds dissemination.
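As a deliberately minimal sketch of such an application (illustrative only, not the production dashboard), the fragment below wires navigation panels for the 5 sections described under "Description of the Dashboard" to a summary table and an on-demand pop-up definition, here implemented with a Shiny modal dialog:

library(shiny)

ui <- navbarPage(
  "HESS Dashboard",
  tabPanel("Introduction and Summary of Your Performance",
           actionLink("hess_def", "What is the HESS?"),  # opens pop-up definition
           tableOutput("summary_table")),
  tabPanel("Explanation of the HESS"),
  tabPanel("Your Data on the HESS for Clinical Care"),
  tabPanel("Your Data on the HESS for Patient Experience"),
  tabPanel("Data Sources and Analyses")  # title paraphrased from the text
)

server <- function(input, output, session) {
  observeEvent(input$hess_def, {
    showModal(modalDialog(
      title = "Health Equity Summary Score",
      "A summary of a contract's performance for underserved groups.",
      easyClose = TRUE))
  })
  # Placeholder values; a real app would draw these from the HESS data files
  output$summary_table <- renderTable(
    data.frame(Component = c("HESS for Clinical Care", "HESS for Patient Experience"),
               Score = c(NA, NA)))
}

shinyApp(ui, server)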
Finally, we conducted usability tests to investigate the navigability of the online dashboard and the clarity of its content.
Description of the Dashboard
The online dashboard consists of 5 sections. The first, “Introduction and Summary of Your Performance,” presents a brief explanation of why the HESS was created, its intended use, and a summary of its major components, along with a table summarizing the contract’s performance. The second section, “Explanation of the HESS,” provides information on the clinical care and patient experience measures that are part of the HESS, the grouping factors (race and ethnicity and dual/LIS eligibility) on which the HESS is based, how scores are calculated, and how to interpret scores. The third and fourth sections mirror one another and are the major focus of the dashboard. “Your Data on the HESS for Clinical Care” presents detailed data on the contract’s performance on the HESS for Clinical Care; “Your Data on the HESS for Patient Experience” presents detailed data on the contract’s performance on the HESS for Patient Experience. The fifth section describes data sources and underlying analyses.
Numerous features were incorporated into the dashboard to aid comprehension and increase data relevance and usability.
Interviews
We conducted 1-hour interviews with QI staff from 9 MA organizations in December 2019 and January 2020. Participating organizations were selected to ensure a broad range of contract enrollment sizes, contract performance on the HESS, and geographic diversity (eAppendix D). Prior to the interview, each organization was sent a paper report containing their contract’s HESS data. Interview questions assessed contracting organizations’ understanding of the HESS and its purpose and the usefulness and interpretability of data displays.
Interviewees generally understood the purpose of the HESS and the report. One interviewee described the report as a tool “to understand our population and target certain quality initiatives and to improve quality scores within those groups.” Another said that the report “provides some health plan–specific data…to drive work to reduce disparities in health care quality.” However, interviewees also identified the need for changes to the report to improve understanding. For example, several interviewees recommended adding an executive summary for senior management and others who need at-a-glance information about a contract’s performance. These comments led to the creation of the “Introduction and Summary of Your Performance” page of the HESS dashboard. Interviewees who were more technically oriented wanted more specific information about how HESS scores are computed and how components are combined to produce the 2 HESS summary scores. Thus, we added columns to the tables in the 2 detailed data sections of the dashboard to show the steps involved in computing cross-sectional and improvement scores (an example of the former is presented in eAppendix E). Most interviewees expressed an interest in having the type of data provided in the report.
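As a hypothetical illustration of those added columns (all groups and values below are invented; see eAppendix E for a real example), a cross-sectional score table might expose intermediate steps like this:

# Invented example of a detailed-data table with intermediate computation
# steps exposed as columns; the real tables follow the published methodology
# (references 6-8) and include additional adjustments.
steps <- data.frame(
  group         = c("Dual/LIS", "Black", "Hispanic"),
  raw_score     = c(72, 68, 75),   # contract's measure score for each group
  national_mean = 70,              # national benchmark (recycled across rows)
  national_sd   = 5
)
steps$z_score <- (steps$raw_score - steps$national_mean) / steps$national_sd
steps  # prints the table with the step-by-step columns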
Listening Sessions
Two virtual listening sessions were held in July 2020. Invitations were sent to all MA organizations approximately 3 weeks in advance of the sessions, explaining that the purpose was to gather feedback on the interpretability and perceived usability of a generic HESS report that would be mailed to attendees prior to the session. In all, 208 people registered and 149 attended.
At each session, a slide presentation was delivered that covered the purpose and construction of the HESS. Polling questions (which used a 5-point response scale from strongly disagree to strongly agree) were interwoven throughout the presentation to gather information about the interpretability of the report’s data displays and the perceived usability of HESS data for QI. Following the slide presentation, a facilitator led a discussion to gather additional verbal feedback from the audience. Attendees also had the option of submitting questions online via the conferencing platform. Sessions lasted 1.5 to 2 hours.
Poll results suggest that attendees thought the information in the HESS report was presented in a way that they could understand. For example, 71% agreed that “the information in the report is provided in a way that I can understand”; only 9% disagreed. A majority (55%) agreed that they understood the purpose of the HESS, although a substantial minority (30%) were unsure about its purpose. A majority (53%) agreed that they could act on the information provided in the HESS report; 41% said they were unsure.
Questions raised by listening session attendees were mainly about the HESS methodology. Several attendees asked whether the methodology could be extended to include other characteristics of individuals with Medicare or other outcome measures. These questions led us to add to the dashboard a statement that the HESS is an evolving methodology and that additional characteristics of MA enrollees and outcome measures could potentially be incorporated, subject to further analysis.
Usability Testing
Eight usability tests of the HESS dashboard were conducted in April 2021. Participants—none of whom were affiliated with an MA organization—were selected because of their familiarity with health care quality measurement and reporting. Each test was conducted individually and lasted approximately 1 hour. A few days before their session, participants were sent a primer that explained the HESS and purpose of the dashboard. During the session, participants were asked to explore the dashboard as if they were a QI manager at an MA contracting organization who was trying to understand the HESS and how their contract performed. Usability tests were conducted virtually; participants shared their computer screens so the interviewer could observe how they navigated the dashboard. After participants finished exploring the dashboard on their own, the interviewer directed them to specific parts of the site and asked questions to assess understanding, ease of navigation, and the perceived usefulness of the data.
Most participants (62.5%; 5 of 8) spent about 30 to 35 minutes exploring the dashboard on their own. All found the data presented on the dashboard potentially valuable—particularly the detailed results—and well presented. All thought the dashboard contained the functionality they would expect, but most (87.5%; 7 of 8) thought navigability could be improved. Participants suggested providing more context for interpreting scores in the summary table, which led us to add the pop-up feature conveying the percentage of contracts that scored higher and lower than the focal contract on each component. Participants also suggested concrete ways to reduce text.
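The computation behind that pop-up is straightforward; a sketch follows, with a hypothetical helper (percentile_context) and simulated scores standing in for real contract-level results:

# Share of contracts scoring above and below the focal contract on a component
percentile_context <- function(all_scores, focal_score) {
  c(pct_higher = 100 * mean(all_scores > focal_score),
    pct_lower  = 100 * mean(all_scores < focal_score))
}

set.seed(1)
percentile_context(all_scores = rnorm(500), focal_score = 0.4)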
Discussion
We designed the HESS dashboard to provide MA contracting organizations with summary and detailed information about how they are performing for enrollees who are dually eligible/LIS recipients, API, Black, or Hispanic, groups that have historically been underserved. The dashboard and its precursor, the paper report, were designed in accordance with established principles for reporting health care quality data. Maurer et al recently published a framework for optimizing such reports that emphasizes the need for content to be understandable, relevant, and timely.20 Many features included in the HESS dashboard were intended to optimize understandability and relevance. Although timeliness was not part of our design considerations, participants in our qualitative studies said that HESS data would be most useful if received soon after official measure scores are published. This suggests the need for an efficient process for updating the HESS as new data become available.
Once in operation, ongoing testing would be needed to ensure that the dashboard continues to meet MA organizations' needs. Such testing could include objective measures of understanding of different website features to complement the mainly subjective assessments included in our evaluation. Usability testing with representatives of MA organizations would also be critical.
Conclusions
Given the increased focus nationally on health equity, there is a need for tools that allow plans to view data on their performance for underserved populations and to track responses to QI initiatives. The dashboard described here is one such tool. Plan leaders may consider it a model for constructing similar tools for evaluating how equitably they are serving their members.
Acknowledgments
This research was supported by a contract from CMS (HHSM-500-2016-00097G) and was produced and disseminated at US taxpayer expense. The authors thank Christopher Maerzluft and Geoffrey Grimm for their assistance with programming the dashboard.
Author Affiliations: RAND Corporation, Pittsburgh, PA (SCM), and Santa Monica, CA (MM, MKB, DA, KH, DDQ, BD, MNE); National Committee for Quality Assurance (SHS, SC), Washington, DC.
Source of Funding: This research was supported by a contract from CMS (HHSM-500-2016-00097G). The views expressed in this article are those of the authors and do not necessarily reflect the views of HHS or CMS.
Author Disclosures: Dr Martino reports receiving grant funding from CMS for the research described in this article. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (SCM, DA, SHS, MNE); acquisition of data (DA, SHS, MNE); analysis and interpretation of data (SCM, MM, MKB, KH, SHS, DDQ, MNE); drafting of the manuscript (SCM, MM, MKB, DDQ, BD); critical revision of the manuscript for important intellectual content (MKB, DA, KH, SHS, SC, DDQ, BD, MNE); statistical analysis (MM, KH); obtaining funding (SCM, SHS, SC, MNE); and administrative, technical, or logistic support (SC, BD, MNE).
Address Correspondence to: Steven C. Martino, PhD, RAND Corporation, 4570 Fifth Ave, Ste 600, Pittsburgh, PA 15213-2665. Email: martino@rand.org.
REFERENCES
1. Report to Congress: social risk factors and performance under Medicare’s value-based purchasing programs. Office of the Assistant Secretary for Planning and Evaluation. December 20, 2016. Accessed May 20, 2022. https://aspe.hhs.gov/pdf-report/report-congress-social-risk-factors-and-performance-under-medicares-value-based-purchasing-programs
2. Second report to Congress on social risk and Medicare’s value-based purchasing programs. Office of the Assistant Secretary for Planning and Evaluation. June 28, 2020. Accessed May 20, 2022. https://aspe.hhs.gov/pdf-report/second-impact-report-to-congress
3. Elshehaly M, Randell R, Brehmer M, et al. QualDash: adaptable generation of visualisation dashboards for healthcare quality improvement. IEEE Trans Vis Comput Graph. 2021;27(2):689-699. doi:10.1109/TVCG.2020.3030424
4. Ivers NM, Barrett J. Using report cards and dashboards to drive quality improvement: lessons learnt and lessons still to learn. BMJ Qual Saf. 2018;27(6):417-420. doi:10.1136/bmjqs-2017-007563
5. Stadler JG, Donlon K, Siewert JD, Franken T, Lewis NE. Improving the efficiency and ease of healthcare analysis through use of data visualization dashboards. Big Data. 2016;4(2):129-135. doi:10.1089/big.2015.0059
6. Agniel D, Martino SC, Burkhart Q, et al. Incentivizing excellent care to at-risk groups with a health equity summary score. J Gen Intern Med. 2021;36(7):1847-1857. doi:10.1007/s11606-019-05473-x
7. Agniel D, Martino SC, Burkhart Q, et al. Measuring inconsistency in quality across patient groups to target quality improvement. Med Care. 2022;60(6):453-461. doi:10.1097/MLR.0000000000001712
8. Martino SC, Ahluwalia S, Harrison J, Kim A, Elliot MN. Developing health equity measures. Office of the Assistant Secretary for Planning and Evaluation. May 2021. Accessed May 20, 2022. https://aspe.hhs.gov/system/files/pdf/265566/developing-health-equity-measures.pdf
9. National Academies of Sciences, Engineering, and Medicine. Accounting for Social Risk Factors in Medicare Payment: Identifying Social Risk Factors. The National Academies Press; 2016. Accessed May 20, 2022. https://nap.nationalacademies.org/catalog/21858/accounting-for-social-risk-factors-in-medicare-payment-identifying-social
10. Faber M, Bosch M, Wollersheim H, Leatherman S, Grol R. Public reporting in health care: how do consumers use quality-of-care information? a systematic review. Med Care. 2009;47(1):1-8. doi:10.1097/MLR.0b013e3181808bb5
11. Hsee CK. The evaluability hypothesis: an explanation for preference reversals between joint and separate evaluations of alternatives. Organ Behav Hum Decis Process. 1996;67(3):247-257. doi:10.1006/obhd.1996.0077
12. Zikmund-Fisher BJ. Helping people know whether measurements have good or bad implications: increasing the evaluability of health and science data communications. Policy Insights Behav Brain Sci. 2019;6(1):29-37. doi:10.1177/2372732218813377
13. Combining healthcare quality measures into composites or summary scores. Agency for Healthcare Research and Quality. Updated September 2019. Accessed May 20, 2022. https://www.ahrq.gov/talkingquality/translate/scores/combine-measures.html
14. Hibbard JH, Peters EM. Supporting informed consumer health care decisions: data presentation approaches that facilitate the use of information in choice. Annu Rev Public Health. 2003;24:413-433. doi:10.1146/annurev.publhealth.24.100901.141005
15. Hibbard JH, Peters E, Slovic P, Finucane ML, Tusler M. Making health care quality reports easier to use. Jt Comm J Qual Improv. 2001;27(11):591-604. doi:10.1016/s1070-3241(01)27051-5
16. Peters E, Dieckmann N, Dixon A, Hibbard JH, Mertz CK. Less is more in presenting quality information to consumers. Med Care Res Rev. 2007;64(2):169-190. doi:10.1177/10775587070640020301
17. American Institutes for Research. How to display comparative information that people can understand and use. Robert Wood Johnson Foundation. July 1, 2010. Accessed May 20, 2022. http://forces4quality.org/af4q/download-document/2557/Resource-HowtoDisplayComparativeInformation_7-23-10.pdf
18. Hildon Z, Allwood D, Black N. Impact of format and content of visual display data on comprehension, choice, and preference. Int J Qual Health Care. 2012;24(1):55-64. doi:10.1093/intqhc/mzr072
19. Chang W, Cheng J, Allaire JJ, et al. shiny: web application framework for R. Comprehensive R Archive Network. December 15, 2022. Accessed May 20, 2022. https://CRAN.R-project.org/package=shiny
20. Maurer M, Carman KL, Yang M, Firminger K, Hibbard J. Increasing the use of comparative quality information in maternity care: results from a randomized controlled trial. Med Care Res Rev. 2019;76(2):208-238. doi:10.1177/1077558717712290