The American Journal of Managed Care

May 2019
Volume 25
Issue 5

Producing Comparable Cost and Quality Results From All-Payer Claims Databases

This paper describes a replicable process for standardizing disparate databases and methods to calculate cost and quality measures within and across states.

ABSTRACT

Objectives: To describe how all-payer claims databases (APCDs) can be used for multistate analysis, evaluating the feasibility of overcoming the common barrier of a lack of standardization across data sets to produce comparable cost and quality results for 4 states. This study is part of a larger project to better understand the cost and quality of healthcare services across delivery organizations.

Study Design: Descriptive account of the process followed to produce healthcare quality and cost measures across and within 4 regional APCDs.

Methods: Partners from Colorado, Massachusetts, Oregon, and Utah standardized the calculations for a set of cost and quality measures using 2014 commercial claims data collected in each state. This work required a detailed understanding of the data sets, collaborative relationships with each other and local partners, and broad standardization. Partners standardized rules for including payers, data set elements, measure specifications, SAS code, and adjustments for population differences in age and gender.

Results: This study resulted in the development of a Uniform Data Structure file format that can be scaled across populations, measures, and research dimensions to provide a consistent method to produce comparable findings.

Conclusions: This study demonstrates the feasibility of using state-based claims data sets and standardized processes to develop comparable healthcare performance measures that inform state, regional, and organizational healthcare policy.

Am J Manag Care. 2019;25(5):e138-e144

Takeaway Points

This study’s results demonstrate the feasibility of assessing healthcare performance within and across states using rich data sources.

  • State-level claims data sets can be standardized to support the development and measurement of comparable metrics to assess performance within and across states.
  • The development of the Uniform Data Structure file format led to the success of the project and can be scaled across populations, measures, and research dimensions.
  • Building relationships among contributors, administrators, and users can increase the likelihood that all-payer claims databases can be leveraged to improve value in healthcare.

In the absence of robust clinical registries, administrative claims represent an important source of information about healthcare delivery in the United States; this is especially true for commercially insured populations, for whom public databases are unavailable.1 Claims data sets are relatively inexpensive to develop and span time and healthcare settings.2

All-payer claims databases (APCDs) systematically collect healthcare claims data, such as medical, pharmacy, eligibility, and provider data, from several payer sources.3 Through a variety of use cases, these data sets promote transparency and, therefore, help to inform policy development, quality improvement, public health, healthcare services research, and consumer choice.4,5 With liberalized data use policies, APCDs could support a variety of stakeholder efforts to obtain a clearer picture of healthcare cost, quality, and utilization across states or regions.3

Although APCDs and multipayer claims databases (which we refer to collectively as APCDs) are rich healthcare data sources, the opportunity to leverage them for cross-state analysis has only been realized through multistate collaborations.6 Furthermore, regional APCDs provide more than just data; the organizations that administer databases bring connections with local stakeholders, including health plans, providers, employers, state policy makers, and consumers, who provide context to the data and offer a forum in which to test assumptions and generate hypotheses. Combining these rich data sources with those insights is likely to increase the value of research conducted using APCDs. Using the APCDs to engage key stakeholders in the analytical process may also increase their interest in the findings and pave the way from dissemination to action.7

As researchers explore the use of APCDs for multistate analysis, the lack of standardization across those data sets frequently emerges as a potential barrier.5 In this paper, we report a method that can be used to overcome this lack of standardization.

The Network for Regional Healthcare Improvement and 4 of its Regional Health Improvement Collaborative (RHIC) members in Colorado, Massachusetts, Oregon, and Utah partnered with the National Bureau of Economic Research (NBER) and Harvard University in the Comparative Health System Performance Initiative Study funded by the Agency for Healthcare Research and Quality (AHRQ).8 AHRQ funded 3 Centers of Excellence to study how healthcare systems promote evidence-based practices in delivering care. The work described in this paper is an output of Project 2, one of a set of projects facilitated through the NBER Center of Excellence. The goal of Project 2 is to better understand the cost and quality of healthcare services across delivery organizations.

In support of Project 2’s aims, this paper describes the steps used by the 4 state partners to develop standardized data sets, produce comparable cost and quality measurement, and share a path forward for others. The methods described test the feasibility of this approach by producing comparable data sets that can be used in more comprehensive future studies. To our knowledge, this is the first time that regional APCDs have been used to comparatively study quality measures across states.5

METHODS

This paper provides an account of the process followed to produce descriptive healthcare quality and cost measures across and within states using commercial claims data from regional APCDs.

Data Sources

Commercial APCD data were used from Colorado, Massachusetts, Oregon, and Utah for calendar year 2014. APCDs consist of payer submissions of member eligibility, healthcare service claims, and provider information for a population of members (Table 1).

Measures

Healthcare quality. Five National Committee for Quality Assurance (NCQA) Healthcare Effectiveness Data and Information Set (HEDIS) quality process measures9 were selected for the analysis: Adolescent Well-Care Visits10; Chlamydia Screening11; Avoidance of Antibiotic Treatment in Adults With Acute Bronchitis12; Follow-up Care for Children Prescribed Attention-Deficit/Hyperactivity Disorder Medications, initiation and maintenance phases13; and Antidepressant Medication Management, acute and continuation phases14; along with 1 Oregon Health and Science University measure, Developmental Screening in the First Three Years of Life.15 In addition, 2 Prevention Quality Indicators were selected as indicators of effective ambulatory care: Hospital Admissions for Ambulatory Care Sensitive Conditions,16 acute composite17 and chronic composite18 (eAppendix [available at ajmc.com]). Measure selection criteria included the ability to calculate the measure using claims data only, demonstration of a high coefficient of variation (maximum variability) within and across the 4 states, priority for states’ healthcare performance improvement initiatives, and relevance to adult and pediatric populations.

Healthcare cost. The cost of healthcare services delivered per member per month (PMPM) was computed from allowed payments reported on claims and adjusted for age and gender. The researchers acknowledge that adjustment for additional risk factors, such as the presence of comorbidities, would be necessary to assess healthcare performance within and across states’ populations. However, for the purpose of this study, cost adjusted for age and gender alone was determined to be sufficient.

In each state, unadjusted PMPM was computed as the total allowed amount for 2014 divided by the total number of eligible member months during the same calendar year. To compute case-mix adjustment factors, each state produced tables of medical and pharmacy cost for age/gender cells, with age groups defined in 5-year increments. The cost for the overall population was also calculated. The adjustment factor for each age/gender cell was the ratio of that cell’s cost to the overall cost; dividing raw average cost by the adjustment factor yielded age/gender-adjusted cost.
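
This arithmetic can be sketched in SAS, the language the project used for its measure code. The sketch below is a minimal illustration, not the project’s actual program: the member_cost table and its fields (region, age_band, gender, allowed_amt, member_months) are hypothetical, and because the paper does not state how factors were applied to subpopulations, the sketch weights each cell’s factor by member months, a standard indirect-standardization choice.

```sas
/* Minimal sketch of the age/gender adjustment described above.        */
/* The member_cost table and its fields are hypothetical, not the      */
/* project's actual UDS fields.                                        */
proc sql;
  /* Adjustment factor: each age/gender cell's PMPM divided by the
     overall population PMPM */
  create table cell_factor as
  select age_band, gender,
         (sum(allowed_amt) / sum(member_months)) /
         (select sum(allowed_amt) / sum(member_months) from member_cost)
           as adj_factor
  from member_cost
  group by age_band, gender;

  /* Raw PMPM per region plus a member-month-weighted expected factor;
     dividing raw PMPM by the factor yields the adjusted PMPM */
  create table region_adjusted as
  select m.region,
         sum(m.allowed_amt) / sum(m.member_months) as raw_pmpm,
         sum(m.member_months * f.adj_factor) / sum(m.member_months)
           as region_factor,
         calculated raw_pmpm / calculated region_factor as adjusted_pmpm
  from member_cost as m
       inner join cell_factor as f
         on m.age_band = f.age_band and m.gender = f.gender
  group by m.region;
quit;
```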

Geographic designation. CMS divides counties into 5 types: large metro, metro, micro, rural, and counties with extreme access considerations (CEAC). Three designations were used for this project: (1) large metro, (2) metro, and (3) a combination of micro, rural, and CEAC counties, hereafter referenced as “rural.”19 Patients were assigned to 1 of the 3 geographic types based on their county of residence.
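
As a minimal sketch (again in SAS), the 3-way collapse could be expressed as a format. The cms_county_type values shown are illustrative labels for the 5 CMS categories rather than a published codeset, and the members data set is an assumed input.

```sas
/* Minimal sketch: collapse the 5 CMS county designations into the    */
/* project's 3 geographic types. Input values are illustrative.       */
proc format;
  value $geo3
    'LARGE METRO'            = 'Large Metro'
    'METRO'                  = 'Metro'
    'MICRO', 'RURAL', 'CEAC' = 'Rural';
run;

data member_geo;
  set members;                               /* assumed input with a   */
  length geo_type $11;                       /* cms_county_type field  */
  geo_type = put(cms_county_type, $geo3.);   /* 3-way project grouping */
run;
```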

Procedures

Our preliminary analysis of APCD comparability and data quality across the 4 states showed that available fields, data definitions, and the completeness and accuracy of claims data varied. Based on this assessment, we took several steps to ensure that the claims-based measures produced from the states’ databases were comparable. All data decisions prioritized measure requirements and specifications when addressing unique characteristics of the participating APCDs and standardizing the database. An external technical advisor guided the entire process. Figure 1 schematically displays the process used to produce cost and quality measures for the 4 states. The steps are described as follows:

1. Sample exclusions and minimum data requirements. Only payers with complete information on the data elements needed to generate the quality and cost measures of interest were included in the Uniform Data Structure (UDS) described later. First, members with plans that do not provide comprehensive coverage (eg, supplemental, limited liability, specific service [behavioral, vision, dental only, and student] plans) were identified through type-of-coverage fields and excluded. For each data contributor, the stability of submissions was assessed by examining the allowed amount PMPM across the 12 observed months. Members with coverage but no claims for services were included to provide appropriate denominators for some of the measures, as required by HEDIS or AHRQ specifications. Second, the completeness of the following data fields needed for the correct calculation of the quality measures was assessed: diagnosis-related groups; procedure codes from the International Classification of Diseases, Ninth Revision and Tenth Revision; admission type and source; Current Procedural Terminology/Healthcare Common Procedure Coding System codes on outpatient claims; place-of-service codes; facility diagnosis and present-on-admission indicator codes; and servicing National Provider Identifier.
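
A minimal sketch of step 1 follows. The coverage_type codes and all data set and field names are illustrative, not actual APCD submission codes.

```sas
/* Minimal sketch of step 1; all names and codes are illustrative.    */

/* Exclude members whose plans lack comprehensive medical coverage */
data eligible_members;
  set all_members;
  if coverage_type in ('SUPP', 'VISION', 'DENTAL', 'BH_ONLY', 'STUDENT')
    then delete;
run;

/* Assess submission stability: allowed amount PMPM by data
   contributor and month across the 12 observed months */
proc sql;
  create table pmpm_by_month as
  select data_contributor, claim_month,
         sum(allowed_amt) / sum(member_months) as pmpm
  from claims_with_eligibility
  group by data_contributor, claim_month
  order by data_contributor, claim_month;
quit;
```

In such a check, a contributor showing large unexplained month-to-month swings in PMPM would be flagged for review before inclusion.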

2. UDS. A UDS file format was created to streamline the common calculation of measure results and minimize the data storage space required. The UDS contained 8 relational tables with all the necessary data fields for the measure set chosen for this project. The tables included in the UDS were member eligibility, professional procedures, professional diagnoses, facility header, facility detail, facility surgical procedures, facility diagnoses, and pharmacy claims. The code for measures included in this paper was generated using SAS software (SAS Institute, Inc; Cary, North Carolina).
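
As a minimal sketch, 2 of the 8 UDS tables might be declared as follows; the field names and types are illustrative, with member_id assumed as the join key.

```sas
/* Minimal sketch of 2 of the 8 UDS relational tables; field names    */
/* and types are illustrative, keyed on an assumed member_id.         */
proc sql;
  create table uds_member_eligibility
    (member_id     char(16),
     birth_year    num,
     gender        char(1),
     zip_code      char(5),
     county_type   char(11),              /* CMS county designation */
     member_months num);

  create table uds_pharmacy_claims
    (member_id   char(16),
     claim_id    char(20),
     ndc_code    char(11),                /* National Drug Code */
     fill_date   num format=date9.,
     days_supply num,
     allowed_amt num);
quit;
```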

3. Provider specialty mapping. CMS’ 2-digit specialty code20 was used as a common data source to identify and standardize provider specialty for attribution to primary care providers (PCPs) and the production of numerators for quality measures.
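
A minimal sketch of the PCP flag follows. Treating codes 01, 08, 11, and 37 as primary care is an assumption for illustration; the project’s actual PCP definition may differ.

```sas
/* Minimal sketch of PCP identification from CMS 2-digit specialty    */
/* codes; the code set treated as primary care is an assumption.      */
data providers_flagged;
  set providers;                           /* assumed input with a     */
                                           /* cms_specialty_code field */
  pcp_flag = (cms_specialty_code in ('01', '08', '11', '37'));
    /* 01 general practice, 08 family practice,
       11 internal medicine, 37 pediatric medicine */
run;
```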

4. Attribution of patients to providers. Each patient was attributed to the PCP whom the patient saw for the most evaluation and management (E&M) visits. The first attribution step assessed patients’ claims in the measurement period (2014). If no PCP could be found, the second attribution step assessed patients’ claims in the prior year.
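
A minimal sketch of the first attribution step follows. Data set and field names are illustrative, and E&M visits are assumed to have been identified already.

```sas
/* Minimal sketch: attribute each member to the PCP seen for the most */
/* E&M visits in 2014. All names are illustrative.                    */
proc sql;
  create table visit_counts as
  select member_id, provider_npi, count(*) as em_visits
  from em_claims_2014
  where pcp_flag = 1                 /* PCPs per the specialty mapping */
  group by member_id, provider_npi;
quit;

proc sort data=visit_counts;
  by member_id descending em_visits;     /* most-seen provider first */
run;

data attribution_2014;
  set visit_counts;
  by member_id;
  if first.member_id;                    /* keep top provider; ties   */
run;                                     /* break arbitrarily here    */

/* Members still unattributed would repeat the same logic on 2013 claims */
```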

5. Attribution of patients to geographic regions. CMS county type designations were used to classify patients in large metro, metro, and rural areas, using their most recent zip code of residence.19

6. Measure codes and execution. Each state partner wrote programs using SAS software for 2 of the 8 quality measures identified for the project. The code for each measure was reviewed and tested by an external technical advisor and, once approved, was then shared among the states. The final product was a uniform, validated SAS program for each of the measures. States used the common SAS code to calculate measures on their UDS.
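
As a greatly simplified illustration of how a shared program runs against the UDS (this is not an actual HEDIS specification), a claims-based rate reduces to a denominator of eligible members and a numerator of those with a qualifying service. The denom_members and qualifying_claims tables are hypothetical stand-ins built from the UDS tables.

```sas
/* Greatly simplified illustration of a claims-based rate, not an     */
/* actual HEDIS specification; input tables are hypothetical.         */
proc sql;
  create table measure_rate as
  select count(distinct d.member_id) as denominator,
         count(distinct n.member_id) as numerator,    /* nulls ignored */
         calculated numerator / calculated denominator as rate
  from denom_members as d
       left join qualifying_claims as n
         on d.member_id = n.member_id;
quit;
```

Because the same validated program runs unchanged on each state’s UDS, differences in results reflect the data rather than the code.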

7. Corroboration of final results. Each state partner checked results against reported values for similar measures included in a variety of nationally and locally available sources. States also held local meetings with stakeholders to present the quality measure results, gauge reasonableness of the findings, and gather potential explanations for variation.

RESULTS

Following the process described previously, 8 data tables were populated for each state. The UDS tables included information at the member and encounter levels about member eligibility and demographic characteristics; professional procedures and diagnoses; facility information, diagnoses, and procedures; and pharmacy information (Figure 1). These tables were used to generate cost and quality measures within and across states.

Table 2 describes variation in age/gender-adjusted PMPM cost in the commercial populations, within and across states. Overall, the PMPM amounts within each state show variation among geographic areas, with rural areas exhibiting higher costs than urban areas.

Figure 2 describes performance on 4 of the 8 quality process measures9-12,15 (Avoidance of Antibiotic Treatment in Adults With Acute Bronchitis, Adolescent Well Care, Chlamydia Screening, and Developmental Screening in the First Three Years of Life) for the 4 states. These 4 measures are highlighted to show the feasibility of across-state comparisons using measures that are relevant to large segments of the states’ patient populations. In general, performance variation was observed across the states; within the 4 states, urban areas had better performance on most measures than rural areas.

Developing a UDS was beneficial and provided sufficient standardization to streamline use of the data elements for common code application. Additional benefits of this approach included increased efficiency and scalability:

  • Code to produce the selected quality measures could be written once and run in each region.
  • Standardized code helped ensure comparability of the results and avoided differing interpretations of measure specifications.
  • Additional states’ or regions’ APCD data can be added using existing code once their data are in the UDS format.
  • Additional measures can be added through 1-time coding, with any fields needed by the new measures added to the UDS.
  • Additional cross-sections (eg, system designation, system type, population characteristics, providers’ characteristics) can be added to stratify the data using either additional APCD fields or external data sources.

Developing the UDS required in-depth knowledge of each APCD, including its structure and underlying completeness and accuracy. Data elements that look alike across APCDs may encode different information and must be well understood before they can be transformed to the standard UDS format. Knowledge of data completeness and accuracy supports a more robust data quality analysis, which leads to more comparable results. In addition to testing measure results for reasonability, states’ relationships with local stakeholders provided avenues for resolving questions regarding data elements, completeness, and accuracy. These relationships allowed APCDs to investigate root causes of data issues and increase stakeholders’ engagement in collective data submission improvement initiatives.

DISCUSSION

Although APCDs share common characteristics, many differences exist (Table 1). Participation is voluntary in some states and mandatory in others, thereby affecting the number and nature of payers represented in APCDs.21 In general, and for this project, the inclusion of self-insured plans varies across APCDs. Stewardship of the APCD differs across the participating states, which affects the ease of access to the data. APCD data formats will also vary, as will data validation processes, including the tolerance level for incomplete data or incorrect data formats.

The value of the external technical advisor and the creation of the UDS file format both contributed significantly to the success of the overall project. The external technical advisor supported coordination across the state teams and provided a framework for compiling the data, conducting quality checks, overseeing the development of the measure code, and producing the results. The UDS reduced each APCD to a comparable data set containing only the fields, field names, and formats needed to develop code to calculate the measures. Quality checks ensured that although the complete commercial population was not included, the results contained only complete medical eligibility and claims information, thereby accurately reflecting what is happening within each state’s commercial population (as described in the Methods section). The UDS file format provided a flexible and scalable structure. The project team recommends developing a UDS tailored to the specific measures of interest for other APCD measure alignment efforts.

The cost and quality indicators described in this paper illustrate how state-level claims data sets can be standardized to support the development of comparable metrics. This work provides a foundational step toward developing a solid multifactorial model that considers a variety of state-, system-, and population-level characteristics that are necessary to explain healthcare performance variation within and across states.

States reviewed available healthcare cost and quality reports and found existing reports to be comparable with the results of this project.22 States also consulted local stakeholders for feedback about the findings; the feedback confirmed the reasonableness of the patterns found and outlined some potential explanations for performance variation within the states.

Use of locally administered data sources provided several advantages. The relationships between APCD administrators and healthcare stakeholders in the states enabled bidirectional communication to develop hypotheses before dedicating significant resources to complex statistical analysis. For nonmandated APCDs, trust developed through longstanding relationships led to data use agreements that permit the use of allowed amounts, which can often be difficult to obtain because of the proprietary nature of payer-provider contract terms. Additionally, stakeholder participation helps with buy-in and engagement in using results to inform specific performance improvement initiatives.5

Limitations

APCDs’ data collection processes vary. APCDs have varying business rules around data collection, which might affect measures of per capita cost and quality. For example, substance use disorder diagnosis and treatment claims are systematically suppressed at the state level in Colorado and Oregon, whereas in Utah and Massachusetts, suppression varies by payer. Sensitivity analyses examining the impact of suppression found that, for comparisons across these 4 states, it was minimal. Among the other factors described above, the choice of quality measures was based on data availability in these 4 states. Other states attempting similar analyses need to consider their own data limitations and completeness when selecting a suitable set of measures. Another variable is the availability of self-insured plans for inclusion in an APCD; self-insured plans were included to the extent that they were represented during the 2014 study year and met the data quality standards applied to this project.

APCDs’ data use regulations vary. States use APCDs for transparency initiatives to inform state policy by creating mandated reports,4,23 but not all states have regulated data uses for operational purposes or research. For example, some APCD regulations allow access only to deidentified or aggregated data. Information about other states’ APCD data uses can be found elsewhere.23 The use of APCDs as described here is possible only for states with regulations or data use agreements that permit this type of work; however, voluntary databases can enter into data commitments with stakeholders, gaining agreement on appropriate use of the data and on technical considerations, such as data quality expectations and submission format.23

CONCLUSIONS

Applying standardized processes of quality control, as well as the creation of a UDS, provides a valuable path forward in leveraging state-level data sets for healthcare performance assessment and making meaningful comparisons across states. The processes and data structures created for this work could be extended to additional states, cost and quality measures, organizational measures, use of integrated care delivery models, practice capabilities, or population characteristics, and adjusted for additional factors such as comorbidities. Any of these use cases could leverage APCD data to inform state policy development and increase understanding of the drivers of value in healthcare.

Comparable healthcare quality and cost measures using APCDs could be produced to assess states’ performance, providing new insights into national and regional variability. This study demonstrates the feasibility of comparison across 4 states with vastly different geographies, healthcare policies, APCD mandates, and data ownership. Insights about results produced by this method are facilitated by strong relationships between local organizations and their stakeholders. This study also demonstrates the potential of coupling APCD analyses with the local knowledge generated within states that maintain and utilize these robust data sets.

As adoption of value-based payment arrangements accelerates, so will the interest in multistate comparisons of cost, quality, and utilization. There will be a growing need to tie results to specific market dynamics and the political, economic, and geographic factors that may be driving them. With sufficient standardization, APCDs could serve as an asset in studying health system performance.

As RHICs, the 4 state partners are trusted, neutral conveners governed by multistakeholder boards composed of healthcare providers (both physicians and hospitals), payers (health insurance plans and government health coverage programs), purchasers of healthcare (employers, unions, retirement funds, and government), and consumers or consumer representatives. RHICs are ideal partners in developing and implementing coordinated, multistakeholder solutions.

Acknowledgments

The authors wish to acknowledge certain individuals as follows. Contributing authors: Nancy Beaulieu, PhD; David Cutler, PhD; Stacy Donohue, MS; Jonathan Mathieu, PhD; and Meredith Roberts Tomasi, MPH. Technical advisor: Judy Loren. Lead data analysts: Edward Davies; Char Kasprzak, MPH; Paul McCormick; and Brantley Scott, MStat.

Author Affiliations: Center for Improving Value in Health Care (MDJD-P), Denver, CO; HealthInsight Utah (RH), Murray, UT; HealthInsight Oregon (ES, DR), Portland, OR; Massachusetts Health Quality Partners (JC), Watertown, MA; Network for Regional Healthcare Improvement (EL), South Portland, ME.

Source of Funding: Agency for Healthcare Research and Quality Award No. U19HS024072-03. The content is solely the responsibility of the authors and does not necessarily represent the official view of the Agency for Healthcare Research and Quality.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MDJD-P, RH); acquisition of data (MDJD-P, RH, DR, JC); analysis and interpretation of data (MDJD-P, RH, DR, JC); drafting of the manuscript (MDJD-P, RH, ES, DR, JC, EL); critical revision of the manuscript for important intellectual content (MDJD-P, RH, ES, DR, JC, EL); statistical analysis (MDJD-P); obtaining funding (ES, DR, JC, EL); administrative, technical, or logistic support (MDJD-P, RH, ES, EL); and supervision (EL).

Address Correspondence to: Emily Levi, MPH, Network for Regional Healthcare Improvement, 500 Southborough Dr, Ste 106, South Portland, ME 04106. Email: elevi@nrhi.org.

REFERENCES

1. Freedman JD, Green L, Landon BE. All-payer claims databases—uses and expanded prospects after Gobeille. N Engl J Med. 2016;375(23):2215-2217. doi: 10.1056/NEJMp1613276.

2. Sarrazin MS, Rosenthal GE. Finding pure and simple truths with administrative data. JAMA. 2012;307(13):1433-1435. doi: 10.1001/jama.2012.404.

3. Peters A, Sachs J, Porter J, Love D, Costello A. The value of all-payer claims databases to states. N C Med J. 2014;75(3):211-213.

4. Ario J, McAvey K. Transparency in healthcare: where we stand and what policy makers can do now. Health Affairs Blog website. healthaffairs.org/do/10.1377/hblog20180703.549221/full. Published July 11, 2018. Accessed July 25, 2018.

5. Bardach NS, Lin GA, Wade E, et al. All-Payer Claims Databases Measurement of Care: Systematic Review and Environmental Scan of Current Practices and Evidence. Rockville, MD: Agency for Healthcare Research and Quality; June 2017. ahrq.gov/sites/default/files/publications2/files/envscanlitrev.pdf. Accessed June 6, 2018.

6. Network for Regional Healthcare Improvement; Utah Department of Health, Office of Health Care Statistics; Berry Dunn McNeil & Parker, LLC. Healthcare affordability: untangling cost drivers. Network for Regional Healthcare Improvement website. nrhi.org/uploads/benchmark_report_final_web.pdf. Published February 13, 2018. Accessed August 24, 2018.

7. Concannon TW, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692-1701. doi: 10.1007/s11606-014-2878-x.

8. PCOR grant awards [news release]. Rockville, MD: Agency for Healthcare Research and Quality; June 15, 2015. archive.ahrq.gov/news/newsroom/press-releases/2015/pcorawards.html. Accessed August 14, 2018.

9. National Committee for Quality Assurance. HEDIS 2014: Volume 2: Technical Specifications. Washington, DC: National Committee for Quality Assurance; 2013.

10. Child and Adolescent Well-Care Visits. National Committee for Quality Assurance website. ncqa.org/hedis/measures/child-and-adolescent-well-care-visits. Accessed April 2, 2019.

11. Chlamydia Screening in Women. National Committee for Quality Assurance website. ncqa.org/hedis/measures/chlamydia-screening-in-women. Accessed April 2, 2019.

12. Avoidance of Antibiotic Treatment in Adults With Acute Bronchitis. National Committee for Quality Assurance website. ncqa.org/hedis/measures/avoidance-of-antibiotic-treatment-in-adults-with-acute-bronchitis. Accessed April 2, 2019.

13. Follow-Up Care for Children Prescribed ADHD Medication. National Committee for Quality Assurance website. ncqa.org/hedis/measures/follow-up-care-for-children-prescribed-adhd-medication. Accessed April 2, 2019.

14. Antidepressant Medication Management. National Committee for Quality Assurance website. ncqa.org/hedis/measures/antidepressant-medication-management. Accessed April 2, 2019.

15. Oregon Health and Science University. Measure DEV-CH: developmental screening in the first three years of life. Oregon Pediatric Improvement Partnership website. oregon-pip.org/focus/Measure%20DEV-CH_2017.pdf. Accessed July 6, 2018.

16. Prevention Quality Indicators v6.0 ICD-9-CM benchmark data tables. Agency for Healthcare Research and Quality website. qualityindicators.ahrq.gov/modules/pqi_resources.aspx. Published October 2016. Accessed August 14, 2018.

17. Prevention quality acute composite technical specifications. Agency for Healthcare Research and Quality website. qualityindicators.ahrq.gov/Downloads/Modules/PQI/V60-ICD10/TechSpecs/PQI_91_Prevention_Quality_Acute_Composite.pdf. Published July 2016. Accessed April 2, 2019.

18. Prevention quality chronic composite technical specifications. Agency for Healthcare Research and Quality website. qualityindicators.ahrq.gov/Downloads/Modules/PQI/V60-ICD10/TechSpecs/PQI_92_Prevention_Quality_Chronic_Composite.pdf. Published July 2016. Accessed April 2, 2019.

19. Medicare Advantage network adequacy criteria guidance. CMS website. cms.gov/Medicare/Medicare-Advantage/MedicareAdvantageApps/Downloads/MA_Network_Adequacy_Criteria_Guidance_Document_1-10-17.pdf. Updated January 10, 2017. Accessed July 25, 2018.

20. Crosswalk: Medicare provider/supplier to healthcare provider taxonomy. CMS website. cms.gov/Medicare/Provider-Enrollment-and-Certification/MedicareProviderSupEnroll/Downloads/JSMTDL-08515MedicarProviderTypetoHCPTaxonomy.pdf. Published September 22, 2008. Updated October 2, 2008. Accessed August 14, 2018.

21. APCD legislation by state. APCD Council website. apcdcouncil.org/apcd-legislation-state. Accessed August 14, 2018.

22. Health Services Advisory Group. 2016 HEDIS aggregate report for Health First Colorado (Colorado’s Medicaid program). State of Colorado website. colorado.gov/pacific/sites/default/files/2016%20HEDIS%20Aggregate%20Report%20for%20Health%20First%20Colorado.pdf. Published October 2016. Accessed May 5, 2017.

23. Harrington A. Releasing APCD data: how states balance privacy and utility. APCD Council website. apcdcouncil.org/publication/releasing-apcd-data-how-states-balance-privacy-and-utility. Published March 2017. Accessed August 14, 2018.
