The American Journal of Managed Care

September 2023
Volume 29
Issue 9

Comparison of Primary Payer in Cancer Registry and Discharge Data

Enrollment in managed care among Medicaid enrollees presents challenges to classifying Medicaid coverage in cancer registries.

ABSTRACT

Objectives: To determine agreement between variables capturing the primary payer at cancer diagnosis across the Pennsylvania Cancer Registry (PCR) and statewide facility discharge records (Pennsylvania Health Care Cost Containment Council [PHC4]) for adults younger than 65 years, and to specifically examine factors associated with misclassification of Medicaid status in the registry given the role of managed care.

Study Design: Cross-sectional analysis of the primary cancer cases among adults aged 21 to 64 years in the PCR from 2010 to 2016 linked to the PHC4 facility visit records.

Methods: We assessed agreement of payer at diagnosis (Medicare, Medicaid, private, other, uninsured, unknown) across data sources, including positive predictive value (PPV) and sensitivity, using the PHC4 records as the gold standard. The probability of misclassifying Medicaid in the registry was estimated using multivariate logit models.

Results: Agreement of payers was high for private insurance (PPV, 89.7%; sensitivity, 83.6%), but there was misclassification and/or underreporting of Medicaid in the registry (PPV, 80%; sensitivity, 58%). Among cases with “other” and “unknown” insurance, 73.8% and 62.1%, respectively, had private insurance according to the PHC4 records. Medicaid managed care was associated with a statistically significant increase of 12.6 percentage points (95% CI, 9.4-15.8) in the probability of misclassifying Medicaid enrollment as private insurance in the registry.

Conclusions: Findings suggest caution in conducting and interpreting research using insurance variables in cancer registries.

Am J Manag Care. 2023;29(9):455-462. https://doi.org/10.37765/ajmc.2023.89425

_____

Takeaway Points

Many researchers rely on cancer registries to study the impact of insurance coverage on cancer outcomes. However, complexities in coverage, including the use of private managed care plans within public programs, complicate precise classification of payers. Using Pennsylvania’s cancer registry linked to state inpatient and outpatient facility visit records, we examined misclassification of payers. Our analysis revealed the following:

  • Misclassification and underreporting of Medicaid insurance were high in cancer registry data.
  • Medicaid managed care increases the probability that Medicaid is misclassified as private insurance.
  • Researchers should be mindful of insurance data issues when conducting and interpreting analyses using payer information in registries.

_____

State cancer registries capture most cancer diagnoses within a population and are an important data source for cancer surveillance and research.1 One major body of research has relied on cancer registries to document and evaluate disparities among patients with cancer by types of insurance coverage, including comparing Medicaid beneficiaries with those covered by commercial insurance.2-9 Many studies have examined disparities in cancer staging, treatment, and survival across insurance types based on the primary payer information recorded in the registries.4,8,10-13 They broadly conclude that insurance coverage and type of health plan, whether private or public or managed care vs fee-for-service, are associated with access to care and health outcomes.

However, the primary payer recorded in cancer registries may be imprecise, which could affect the estimates of disparities by types of coverage.14-17 In the United States, complex coverage details of health insurance plans, such as public coverage provided through private managed care, may lead to misclassification of the primary payer. The potential for misclassification is particularly concerning when estimating disparities in cancer care associated with Medicaid, the largest single source of insurance in the United States, which provides coverage for low-income or disabled adults.18,19 Because Medicaid primarily serves populations with pronounced barriers to care,20-23 understanding the role and effectiveness of the program in addressing cancer-related disparities is critical. However, increasing and varying reliance on managed care organizations (MCOs) across states complicates accurate classification of Medicaid because many MCOs serve individuals enrolled in Medicaid, those eligible for Medicare, and those who purchase plans in private insurance markets.24 Hence, there is ample opportunity for miscoding across Medicaid, Medicare, and private insurance. The prospect of misclassification underscores the need to evaluate the payer variable in cancer registries to determine the appropriateness of registry-based analysis of disparities.

Therefore, our objective was to examine the primary payer variable in cancer registries using the Pennsylvania Cancer Registry (PCR) as a case study. We performed a linkage of the cancer registry data with hospital-based facility procedure records collected by an independent state agency (Pennsylvania Health Care Cost Containment Council, or PHC425) and estimated the extent to which insurance variables agreed across the linked data sources. Further, we assessed factors associated with misclassification of Medicaid in the registry data, including enrollment in Medicaid MCOs, which cover almost 96% of Medicaid beneficiaries in Pennsylvania.26

METHODS

Data

We identified all first lifetime primary cancer cases among adults aged 21 to 64 years in the PCR from 2010 to 2016. The PCR is a population-based cancer registry that has been certified by the North American Association of Central Cancer Registries (NAACCR) and the National Program of Cancer Registries.27 It includes information on patient demographics and details about incident cancer cases reported by multiple sources, including hospitals, doctors’ offices, and death certificates. We excluded patients with missing or invalid diagnosis month information, a second cancer diagnosis within 365 days of the initial diagnosis, or a diagnosis date established through autopsy or death (Figure). The study was reviewed and determined exempt under Category 4 (secondary research) by the Institutional Review Board at the University of Pittsburgh.

Each case in the registry was linked to the PHC4 inpatient discharge and outpatient procedure records (2010-2017) using a unique masked patient identifier provided by state agencies. The PHC4 is an independent state agency authorized to collect public data from health care facilities in Pennsylvania.28 It collects approximately 2 million inpatient hospital discharge and hospital outpatient procedure records every year. All licensed health care facilities in Pennsylvania are legally obligated to submit quarterly data that are validated before being made available to researchers. The inpatient records include all inpatient discharges, including psychiatric/behavioral health, rehabilitation, and drug and alcohol records (skilled nursing facility, swing bed, transitional care, 23-hour observation, and hospice records are excluded).29 The outpatient records include a range of procedures performed within a hospital facility or other facilities associated with the hospital such as freestanding ambulatory surgery centers, ambulatory surgery rooms, short-term procedure units, laboratories, and the like.

Because the primary outcome of the analysis was insurance status around the time of diagnosis, all PHC4 records, rather than only cancer-related encounter records, were included. The linked data used for the current analysis also include information on time between the facility encounter and the date of cancer diagnosis. Any linked PHC4 records occurring more than 30 days before or after cancer diagnosis were excluded, given our focus on insurance status around the time of diagnosis.
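As a rough illustration of how this restriction could be applied, the sketch below links cases to encounters and keeps only records within the 30-day window. It is written in Python/pandas rather than the SAS and Stata code actually used in the study, and the column names (masked_patient_id, encounter_date, diagnosis_date) are hypothetical placeholders, not the layout of the restricted analytic file.

```python
import pandas as pd

# Minimal sketch under assumed column names; not the study's actual code.
def link_and_restrict(registry: pd.DataFrame, phc4: pd.DataFrame,
                      window_days: int = 30) -> pd.DataFrame:
    """Attach PHC4 encounters to registry cases via the masked identifier and
    keep only encounters within +/- window_days of the cancer diagnosis."""
    linked = registry.merge(phc4, on="masked_patient_id", how="inner")
    days_from_dx = (linked["encounter_date"] - linked["diagnosis_date"]).dt.days
    return linked.loc[days_from_dx.abs() <= window_days].copy()
```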

Supplementary area-level data were merged with patient-level records based on patient Census tract of residence in the registry. Rural residence was defined based on rural-urban commuting area (RUCA) codes from the US Department of Agriculture, with urban areas defined as RUCA codes 1 to 3, large town areas as RUCA codes 4 to 6, and rural areas as RUCA codes 7 to 10.30 The Area Deprivation Index (ADI) was used to capture Census tract–level socioeconomic status, with quartile 1 representing the least disadvantaged areas and quartile 4 representing the most disadvantaged.31
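A minimal sketch of this area-level coding follows. It assumes each case already carries its RUCA code and ADI ranking after the Census tract merge, and the exact way ADI quartiles were constructed (for example, within the analytic sample) is an assumption for illustration.

```python
import pandas as pd

def classify_rurality(ruca_code: int) -> str:
    """Group RUCA codes as in the study: 1-3 urban, 4-6 large town, 7-10 rural."""
    if 1 <= ruca_code <= 3:
        return "urban"
    if 4 <= ruca_code <= 6:
        return "large town"
    return "rural"

def adi_quartile(adi_rank: pd.Series) -> pd.Series:
    """Quartile 1 = least disadvantaged, quartile 4 = most disadvantaged.
    Forming quartiles within the sample is an illustrative assumption."""
    return pd.qcut(adi_rank, q=4, labels=[1, 2, 3, 4])
```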

Classification of Insurance

Payer variable coding and categories differ across the PCR and the PHC4 records. In the registry, a single “primary payer at diagnosis” variable records the patient’s primary insurance at the time of diagnosis and/or treatment,32 whereas the PHC4 records contain 3 payer variables, populated based on all sources of payment for each visit. We created a mutually exclusive insurance variable that summarizes payer information from each data source using 6 categories in the following hierarchy: Medicare (including Medicare-Medicaid duals), Medicaid, private, other, uninsured, and unknown. For the registry data, we recoded the predefined categories; for the PHC4 records, we used a combination of the 3 payer variables to designate insurance status (eAppendix A [eAppendices available at ajmc.com]). When classifying insurance, we used the PHC4 record closest to the diagnosis within the 30-day window around the time of diagnosis.
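The sketch below illustrates the hierarchical collapse of the 3 PHC4 payer fields into a single category and the selection of the record closest to diagnosis. It assumes the raw payer codes have already been mapped to the 6 categories (the actual code-to-category mapping is given in eAppendix A), and the column names are placeholders.

```python
import pandas as pd

# Hierarchy used to form one mutually exclusive payer category per record.
HIERARCHY = ["Medicare", "Medicaid", "private", "other", "uninsured", "unknown"]

def summarize_phc4_payer(payer1, payer2, payer3) -> str:
    """Collapse the 3 PHC4 payer fields (already mapped to the 6 categories)
    into a single category using the hierarchy above."""
    reported = {p for p in (payer1, payer2, payer3) if pd.notna(p)}
    for category in HIERARCHY:
        if category in reported:
            return category
    return "unknown"

def payer_at_diagnosis(case_encounters: pd.DataFrame) -> str:
    """Use the encounter closest to the diagnosis date within the 30-day window."""
    gap = (case_encounters["encounter_date"] - case_encounters["diagnosis_date"]).abs()
    closest = case_encounters.loc[gap.idxmin()]
    return summarize_phc4_payer(closest["payer1"], closest["payer2"], closest["payer3"])
```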

Statistical Analysis

We performed a cross-tabulation of insurance variables across the 2 data sources to calculate the positive predictive value (PPV) and sensitivity of the payer information in the registry, using the PHC4 as the gold standard based on the assumption that the PHC4 records are more precise because they are based on billing information from a single inpatient or outpatient visit and are recorded in a more detailed manner for reimbursement purposes. In the context of our study, PPV measures the proportion of cases assigned to a given payer category in the registry that are confirmed as that payer in the PHC4 records, whereas sensitivity measures the proportion of cases in a given payer category according to the PHC4 records that the registry classified correctly. In addition, we calculated Cohen κ statistics to test the reliability of insurance categorization between the 2 data sources.
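For concreteness, the sketch below computes these agreement measures from a case-level file with one registry payer and one PHC4 payer per case; the column names are placeholders.

```python
import pandas as pd

CATEGORIES = ["Medicare", "Medicaid", "private", "other", "uninsured", "unknown"]

def agreement_stats(df: pd.DataFrame) -> pd.DataFrame:
    """PPV and sensitivity by payer, with the PHC4 category as the gold standard."""
    xtab = (pd.crosstab(df["registry_payer"], df["phc4_payer"])
              .reindex(index=CATEGORIES, columns=CATEGORIES, fill_value=0))
    diag = pd.Series({c: xtab.loc[c, c] for c in CATEGORIES})
    return pd.DataFrame({
        "ppv": diag / xtab.sum(axis=1),          # share of registry assignments confirmed by PHC4
        "sensitivity": diag / xtab.sum(axis=0),  # share of PHC4 assignments captured by the registry
    })

def cohen_kappa(df: pd.DataFrame) -> float:
    """Cohen's kappa for overall agreement between the two classifications."""
    xtab = (pd.crosstab(df["registry_payer"], df["phc4_payer"], normalize=True)
              .reindex(index=CATEGORIES, columns=CATEGORIES, fill_value=0))
    p_obs = sum(xtab.loc[c, c] for c in CATEGORIES)
    p_exp = sum(xtab.loc[c].sum() * xtab[c].sum() for c in CATEGORIES)
    return (p_obs - p_exp) / (1 - p_exp)
```

Because PPV divides by the row totals (the registry's assignments) and sensitivity by the column totals (the PHC4 assignments), the two measures can diverge sharply for the same payer, as they do for Medicaid in our results.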

To predict the likelihood of misclassifying Medicaid in the registry, we limited the sample to cases classified as having Medicaid (either managed care or fee-for-service) in the PHC4 records and estimated logit models with Medicaid misclassification in the registry (overall misclassification and misclassification as private insurance) as outcomes while controlling for managed care enrollment and other patient- and area-level covariates. We clustered the SEs at the PHC4 facility level because the classification of payers is likely correlated with coding practices within each facility. We report marginal effects on the probability of misclassification. All analyses used SAS version 9.0 (SAS Institute) and Stata/MP 17 (StataCorp).
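A minimal sketch of one of these models, written with Python's statsmodels rather than the SAS/Stata code actually used, is shown below; the outcome and covariate names are placeholders for the variables described above.

```python
import statsmodels.formula.api as smf

def fit_misclassification_model(medicaid_cases):
    """Logit for registry misclassification of Medicaid as private insurance,
    with SEs clustered on the PHC4 facility; variable names are placeholders."""
    model = smf.logit(
        "misclassified_as_private ~ managed_care + C(age_group) + C(race_ethnicity)"
        " + C(rurality) + C(adi_quartile) + C(diagnosis_year)",
        data=medicaid_cases,
    )
    result = model.fit(
        cov_type="cluster",
        cov_kwds={"groups": medicaid_cases["facility_id"]},
    )
    # Average marginal effects, analogous to the percentage-point changes reported.
    print(result.get_margeff(at="overall", method="dydx").summary())
    return result
```

The stratified analyses described next can be produced by refitting the same specification on the relevant subsamples.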

We also conducted 2 stratified analyses. First, we stratified our main analysis by years before (2010-2014) and after (2015-2016) Pennsylvania expanded Medicaid to cover newly eligible adults with incomes below 138% of the federal poverty line33 because Medicaid expansion changed the composition of the population covered by Medicaid34 and increased enrollment in managed care plans to accommodate the newly eligible enrollees.35 Second, we separately analyzed misclassification by 2 age groups (adults younger vs older than 50 years) to account for the higher incidence of cancer among older adults.36

RESULTS

Nearly all cases in the registry had at least 1 PHC4 record (n = 190,765; 94%). Our analytic sample consisted of the 77% (n = 146,365) of matched cases in the registry that had a linked record within 1 month before or after cancer diagnosis. A little more than half of the analytic sample also had a record coinciding with the day of cancer diagnosis (n = 81,693; 56%). The patients included in the analytic sample tended to be women (57.5%), older than 45 years (82.6%), and non-Hispanic White (80.4%); have localized cancer (40.7%); and live in urban areas (87.1%) (Table 1 [part A and part B]). Compared with the full sample, cases classified as having Medicaid in the PHC4 records included larger proportions of non-Hispanic Black patients (24.1%), lung cancers (16.2%), distant-stage cancers (33.8%), and individuals in the fourth-quartile ADI group (49.5%). In the full analytic sample, the most common source of coverage was private insurance (64.6%), followed by Medicaid (11.5%).

The comparison of the payers revealed varying levels of agreement between the registry and the PHC4 records by insurance type (Table 2). PPV was moderately high for Medicaid (80%), meaning that 80% of patients for whom the primary payer in the registry was Medicaid indeed were covered by Medicaid according to the PHC4 records. However, sensitivity was low (58%), indicating that among Medicaid-insured individuals in the PHC4 records, the registry classified only 58% of them as having Medicaid. In contrast, both PPV and sensitivity were high for private insurance (89.7% and 83.6%, respectively). In addition, the rate of misclassification was high for patients who had “other” or “unknown” insurance, most of whom had private insurance according to the PHC4 records. Finally, 42.7% of patients classified as uninsured in the registry were insured by Medicaid according to the PHC4 records. The κ statistic was 0.544 (95% CI, 0.540-0.547), indicating a moderate level of agreement in insurance classifications.

In the logit models (Table 3), having Medicaid managed care was associated with a higher probability of overall misclassification of Medicaid (across any other insurance category) and misclassification of Medicaid as private insurance by 2.6 percentage points (95% CI, –2.1 to 7.3; P = .278) and 12.6 percentage points (95% CI, 9.4-15.8; P < .001), respectively. The likelihood of any Medicaid misclassification was lower among non-Hispanic Black patients compared with non-Hispanic White patients and among those living in nonurban areas or the highest ADI quartile. The likelihood of Medicaid misclassification as private insurance specifically was lower among Hispanic patients and all age groups older than 50 years but higher among those who lived in the lowest ADI quartile. The likelihood of misclassification as private insurance remained significant and consistent when stratified by age groups and time periods before and after Medicaid expansion (Table 4, eAppendix B, and eAppendix C).

DISCUSSION

We examined the concordance of the primary payer variable in a large statewide cancer registry by linking the PCR with Pennsylvania’s facility encounter records, contributing to the limited prior literature that has explored linkages of registries with other data sources such as Medicaid and Medicare claims and enrollment files,15,37,38 all-payer claims databases (APCDs),39,40 hospital registries,41 and electronic health records from health systems.42,43 In our study, we found only a moderate level of agreement in insurance classification between the registry and the facility encounter records (κ = 0.544). Although agreement was high for private insurance, there was substantial misclassification and underreporting of Medicaid in the registry. Moreover, a sizeable proportion of cases in the registry with unknown/other insurance was identified as having private insurance in the PHC4 records. Notably, 42.7% of patients classified as uninsured in the registry had Medicaid according to the PHC4 records. Lastly, Medicaid managed care was significantly associated with a higher probability of misclassifying Medicaid as private insurance. Overall, these findings demonstrate the limitations of the primary payer variable in cancer registries in ascertaining sources of insurance coverage among adult patients with cancer who are younger than 65 years.

Our results are comparable to those of earlier studies using registry/administrative data linkages. For example, Chan et al used the Medi-Cal enrollment files and reported that there was a moderate PPV (77%) and a poor sensitivity (48%) for Medicaid in the California cancer registry.15 Perraillon et al linked the Colorado Central Cancer Registry to the state’s APCD and found a higher PPV and sensitivity for Medicaid (97% and 70%, respectively) and a similar agreement for private insurance (86%-88% for both PPV and sensitivity) (unpublished data, October 2022). The higher rate of agreement in their study may result from the inclusion of all claims (as opposed to only facility-based records) and the fact that Colorado’s Medicaid program does not have managed care plans administered by private insurance companies. In contrast, Medicaid MCOs covered nearly all Medicaid enrollees in Pennsylvania in 2021.26

The primary payer variable in cancer registries has been recognized by registrars as one of the most difficult fields to populate due to lack of uniform training and the complex and constantly evolving landscape of insurance plans.16,44 This could explain the finding that enrollment in a Medicaid MCO significantly increased the probability of misclassification as private insurance, even though the reporting manual for the PCR specifies a category for Medicaid administered through an MCO.32 Because MCOs often serve enrollees across Medicaid, Medicare, and private insurance plans, there is a greater potential for miscoding public insurance such as Medicaid as private insurance. We also found that the association between Medicaid managed care and misclassification in the registry remained significant in years after Pennsylvania’s Medicaid expansion, suggesting that misclassification will persist with increasing penetration of Medicaid managed care.

Given the evidence of misclassification, we suggest caution in using and interpreting analyses that rely on the payer variable in cancer registry data. Although we do not formally quantify the impacts of such misclassifications on the estimates of disparities, our findings should prompt researchers to consider the appropriateness of the registry-based payer variable in specific contexts. For example, the use of the payer variable may be reasonable when defining a sample of cases within each payer category, especially for payers such as private insurance, which had a high PPV. However, comparing outcomes between payers can be problematic due to the low sensitivity for certain payer categories. Commonly, studies of disparities in cancer outcomes compare Medicaid beneficiaries with privately insured individuals. If a substantial portion of patients with Medicaid (42% in our study) is misclassified, mostly as having private insurance, then the comparison will bias the estimates of disparities toward the null due to the cross-sorting of cancer cases between Medicaid and private insurance. However, there may be miscoding of Medicaid and private insurance as other categories as well. This, combined with selection into plans of different benefit designs and care quality, suggests that bias could run in either direction in ways that may not be predicted ex ante. We conducted a simple, unadjusted assessment of disparities in 5-year mortality rates between Medicaid and privately insured patients diagnosed with 4 major cancers (breast, colorectal, lung, and prostate) using the payer variable in the registry vs the PHC4 records and found that using the registry payer variable generally underestimated disparities across cancer types and stages. This is consistent with our hypothesis that misclassification across these 2 payer types attenuates estimates of disparities (eAppendix D).
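To see why this cross-sorting pulls the comparison toward the null, consider the purely illustrative calculation below. The mortality rates and sample sizes are hypothetical (not estimates from this study), and the sketch assumes misclassified Medicaid cases have the same outcomes as correctly classified ones, which need not hold in practice.

```python
# Purely illustrative numbers; not estimates from the study.
true_mortality = {"Medicaid": 0.40, "private": 0.25}
n_medicaid, n_private = 1_000, 5_000

# Suppose 42% of Medicaid cases (1 - 0.58 sensitivity) are coded as private.
missed = int(0.42 * n_medicaid)

observed_medicaid = true_mortality["Medicaid"]  # cases still labeled Medicaid
observed_private = (n_private * true_mortality["private"]
                    + missed * true_mortality["Medicaid"]) / (n_private + missed)

print(f"true gap:     {true_mortality['Medicaid'] - true_mortality['private']:.3f}")
print(f"observed gap: {observed_medicaid - observed_private:.3f}")
# The observed gap is smaller because misclassified Medicaid cases inflate the
# apparent mortality of the 'private' group, attenuating the disparity.
```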

Although our study of misclassification is specific to the use of cancer registries, it represents an important case study that illuminates one of many data-related issues that researchers may encounter when examining the highly complex system of insurance coverage in the United States. In particular, as the boundary between public and private insurance becomes increasingly blurred, researchers must grapple with complicated questions about how to conceptualize different insurance classifications.45 For individuals enrolled in private managed care plans under Medicaid, the primary payer is Medicaid, but depending on the research question of interest, researchers need access to high-quality data that accurately and consistently account for granular coverage details, such as private administration of public coverage. We demonstrate how linkages to ancillary data sources (such as PHC4) that contain multiple, detailed payer-type variables may support more nuanced analyses of coverage.46 Such linkages may also be valuable in investigating secondary insurance, which is relevant for adults 65 years and older, most of whom have Medicare supplemental coverage. For example, multiple payer-type variables may better facilitate identification of individuals who are dually eligible for Medicare and Medicaid, who may be coded as having either Medicaid or Medicare (depending on how one conceptualizes the primary payer for these individuals) but not “Medicare with Medicaid eligibility,” which is one of the categories for the primary payer variable in the NAACCR system.47

Limitations

We acknowledge that the facility data used in our study are provided by hospitals and outpatient facilities and do not represent a true gold standard for verifying insurance information. However, we assume that they are a more reliable source for ascertaining coverage because they are generated by facilities that have financial incentives to code accurately and comprehensively for reimbursement purposes.48 Our linked data also include most, if not all, hospital-based records for each cancer case, an improvement over registry-only data that record insurance information at a single point in time. Moreover, compared with other studies using Medicaid enrollment records or APCDs, our data also capture the uninsured, an important population to monitor because uninsured individuals are more likely than the insured to experience suboptimal cancer outcomes.7 Even so, facility records are not perfect. Retroactive coverage,49 for example, is expected to be reflected in the PHC4 records because hospitals report the payer from which they expect to receive reimbursement, but we cannot rule out the possibility that a portion of uninsured and Medicaid-insured patients is misclassified because the data do not capture the complexity of presumptive eligibility or retroactive enrollment.

CONCLUSIONS

Considering the high rate of misclassification in cancer registries, we recommend that researchers supplement the registry with ancillary data sources whenever possible for analyses that require detailed and accurate insurance coverage information. At a minimum, researchers should consider how the unique insurance landscape (including managed care penetration) in each state may create state-level variation in payers and populations covered by different payers when conducting and interpreting such analyses. 

Author Affiliations: Department of Health Policy and Management, University of Pittsburgh School of Public Health (YK, CD, LMS), Pittsburgh, PA; Department of Health Systems, Management & Policy, University of Colorado Anschutz Medical Campus (MCP, CJB), Aurora, CO; Department of Urology, Division of Health Services Research, University of Pittsburgh School of Medicine (BLJ), Pittsburgh, PA; Colorado School of Public Health (CJB), Aurora, CO.

Source of Funding: The study was funded by a grant from the Agency for Healthcare Research and Quality (R01HS027396).

Author Disclosures: Mr Kwon and Drs Drake and Sabik report grant funding from the Agency for Healthcare Research and Quality (R01HS027396). The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MCP, BLJ, LMS); acquisition of data (CJB, LMS); analysis and interpretation of data (YK, MCP, CD, BLJ, CJB, LMS); drafting of the manuscript (YK, MCP, LMS); critical revision of the manuscript for important intellectual content (YK, MCP, CD, BLJ, CJB, LMS); statistical analysis (YK, MCP, CD, LMS); obtaining funding (LMS); and supervision (BLJ, LMS).

Address Correspondence to: Youngmin Kwon, BA, University of Pittsburgh School of Public Health, A610 Public Health, 130 DeSoto St, Pittsburgh, PA 15261. Email: yok84@pitt.edu.

REFERENCES

1. White MC, Babcock F, Hayes NS, et al. The history and use of cancer registry data by public health cancer control programs in the United States. Cancer. 2017;123(suppl 24):4969-4976. doi:10.1002/cncr.30905

2. Bradley CJ, Gardiner J, Given CW, Roberts C. Cancer, Medicaid enrollment, and survival disparities. Cancer. 2005;103(8):1712-1718. doi:10.1002/cncr.20954

3. Kneuertz PJ, Kao LS, Ko TC, Wray CJ. Regional disparities affect treatment and survival of patients with intrahepatic cholangiocarcinoma—a Texas Cancer Registry analysis. J Surg Oncol. 2014;110(4):416-421. doi:10.1002/jso.23664

4. Sung JM, Martin JW, Jefferson FA, et al. Racial and socioeconomic disparities in bladder cancer survival: analysis of the California Cancer Registry. Clin Genitourin Cancer. 2019;17(5):e995-e1002. doi:10.1016/j.clgc.2019.05.008

5. Walker GV, Grant SR, Guadagnolo BA, et al. Disparities in stage at diagnosis, treatment, and survival in nonelderly adult patients with cancer according to insurance status. J Clin Oncol. 2014;32(28):3118-3125. doi:10.1200/JCO.2014.55.6258

6. Ellis L, Canchola AJ, Spiegel D, Ladabaum U, Haile R, Gomez SL. Trends in cancer survival by health insurance status in California from 1997 to 2014. JAMA Oncol. 2018;4(3):317-323. doi:10.1001/jamaoncol.2017.3846

7. Niu X, Roche LM, Pawlish KS, Henry KA. Cancer survival disparities by health insurance status. Cancer Med. 2013;2(3):403-411. doi:10.1002/cam4.84

8. Coburn N, Fulton J, Pearlman DN, Law C, DiPaolo B, Cady B. Treatment variation by insurance status for breast cancer patients. Breast J. 2008;14(2):128-134. doi:10.1111/j.1524-4741.2007.00542.x

9. Roetzheim RG, Pal N, Gonzalez EC, Ferrante JM, Van Durme DJ, Krischer JP. Effects of health insurance and race on colorectal cancer treatments and outcomes. Am J Public Health. 2000;90(11):1746-1754. doi:10.2105/ajph.90.11.1746

10. Berrian JL, Liu Y, Lian M, Schmaltz CL, Colditz GA. Relationship between insurance status and outcomes for patients with breast cancer in Missouri. Cancer. 2021;127(6):931-937. doi:10.1002/cncr.33330

11. Parikh AA, Robinson J, Zaydfudim VM, Penson D, Whiteside MA. The effect of health insurance status on the treatment and outcomes of patients with colorectal cancer. J Surg Oncol. 2014;110(3):227-232. doi:10.1002/jso.23627

12. Parikh-Patel A, Morris CR, Kizer KW. Disparities in quality of cancer care: the role of health insurance and population demographics. Medicine (Baltimore). 2017;96(50):e9125. doi:10.1097/MD.0000000000009125

13. Shah AA, Sun Z, Eom KY, et al. Treatment disparities in muscle-invasive bladder cancer: evidence from a large statewide cancer registry. Urol Oncol. 2022;40(4):164.e17-164.e23. doi:10.1016/j.urolonc.2021.12.004

14. Sabik LM, Bradley CJ. Understanding the limitations of cancer registry insurance data—implications for policy. JAMA Oncol. 2018;4(10):1432-1433. doi:10.1001/jamaoncol.2018.2436

15. Chan JK, Gomez SL, O’Malley CD, Perkins CI, Clarke CA. Validity of cancer registry Medicaid status against enrollment files: implications for population-based studies of cancer outcomes. Med Care. 2006;44(10):952-955. doi:10.1097/01.mlr.0000220830.46929.43

16. Sherman RL, Williamson L, Andrews P, Kahn A. Primary payer at DX: issues with collection and assessment of data quality. J Registry Manag. 2016;43(2):99-100.

17. Gershman S, Weiss N, Knowlton R, Solis A, Das B. An assessment of the primary payer variable among breast and colorectal cancer cases in the Massachusetts Cancer Registry, 2005-2009. J Registry Manag. 2017;44(4):143-145.

18. Rudowitz R, Garfield R, Hinton E. 10 things to know about Medicaid: setting the facts straight. Kaiser Family Foundation. March 6, 2019. Accessed February 25, 2022. https://web.archive.org/web/20220210161030/https://www.kff.org/medicaid/issue-brief/10-things-to-know-about-medicaid-setting-the-facts-straight/

19. Gabow P, Daschle T. Fifty years later: why Medicaid still matters. Health Affairs. July 29, 2015. Accessed February 25, 2022. https://www.healthaffairs.org/do/10.1377/forefront.20150729.049632

20. Choi SK, Adams SA, Eberth JM, et al. Medicaid coverage expansion and implications for cancer disparities. Am J Public Health. 2015;105(suppl 5):S706-S712. doi:10.2105/AJPH.2015.302876

21. Corallo B, Moreno S. Analysis of recent national trends in Medicaid and CHIP enrollment. Kaiser Family Foundation. February 2, 2022. Accessed February 25, 2022. https://web.archive.org/web/20220221235631/https://www.kff.org/coronavirus-covid-19/issue-brief/analysis-of-recent-national-trends-in-medicaid-and-chip-enrollment/

22. Racial and ethnic disparities in Medicaid: an annotated bibliography. Medicaid and CHIP Payment and Access Commission. April 2021. Accessed February 25, 2022. https://www.macpac.gov/wp-content/uploads/2021/04/Racial-and-Ethnic-Disparities-in-Medicaid-An-Annotated-Bibliography.pdf

23. Koroukian SM, Bakaki PM, Raghavan D. Survival disparities by Medicaid status. Cancer. 2012;118(17):4271-4279. doi:10.1002/cncr.27380

24. Hinton E, Rudowitz R, Stolyar L, Singer N. 10 things to know about Medicaid managed care. Kaiser Family Foundation. October 29, 2020. Accessed January 25, 2022. https://web.archive.org/web/20220121193245/https://www.kff.org/medicaid/issue-brief/10-things-to-know-about-medicaid-managed-care/

25. About the council. Pennsylvania Health Care Cost Containment Council. Accessed October 7, 2021. https://www.phc4.org/council/mission.htm

26. Share of Medicaid population covered under different delivery systems. Kaiser Family Foundation. Accessed December 10, 2021. https://bit.ly/44sew7Y

27. Pennsylvania Cancer Registry. Pennsylvania Department of Health. Accessed December 6, 2021. https://www.health.pa.gov/topics/Reporting-Registries/Cancer-Registry/Pages/Cancer%20Registry.aspx

28. PHC4 – 16 years of results. Pennsylvania Health Care Cost Containment Council. November 2002. Accessed November 21, 2022. https://www.phc4.org/wp-content/uploads/phc4fyi14.pdf

29. Data reporting information for new facilities. Pennsylvania Health Care Cost Containment Council. Accessed February 28, 2022. https://www.phc4submit.org/ProviderInformation.aspx

30. RUCA data: code definitions, version 2.0. Rural Health Research Center. Accessed February 25, 2022. https://depts.washington.edu/uwruca/ruca-codes.php

31. About the Neighborhood Atlas. Center for Health Disparities Research. Accessed February 25, 2022. https://www.neighborhoodatlas.medicine.wisc.edu/

32. Pennsylvania Cancer Registry reporting manual. Pennsylvania Department of Health. Updated 2021. Accessed December 6, 2021. https://www.health.pa.gov/topics/Documents/Reporting-Registries/PCR%20Reporting%20Manual.pdf

33. Medicaid expansion to the new adult group. Medicaid and CHIP Payment and Access Commission. Accessed October 21, 2022. https://www.macpac.gov/subtopic/medicaid-expansion/

34. Hill SC, Abdus S, Hudson JL, Selden TM. Adults in the income range for the Affordable Care Act’s Medicaid expansion are healthier than pre-ACA enrollees. Health Aff (Millwood). 2014;33(4):691-699. doi:10.1377/hlthaff.2013.0743

35. Norris L. Medicaid eligibility and enrollment in Pennsylvania. HealthInsurance.org. Accessed October 21, 2022. https://www.healthinsurance.org/medicaid/pennsylvania/

36. Age and cancer risk. National Cancer Institute. Updated March 5, 2021. Accessed October 21, 2022. https://www.cancer.gov/about-cancer/causes-prevention/risk/age#:~:text=The%20incidence%20rates%20for%20cancer,groups%2060%20years%20and%20older

37. Schrag D, Virnig BA, Warren JL. Linking tumor registry and Medicaid claims to evaluate cancer care delivery. Health Care Financ Rev. 2009;30(4):61-73.

38. Nadpara PA, Madhavan SS. Linking Medicare, Medicaid, and cancer registry data to study the burden of cancers in West Virginia. Medicare Medicaid Res Rev. 2012;2(4):mmrr.002.04.a01. doi:10.5600/mmrr.002.04.a01

39. Garvin JH, Herget KA, Hashibe M, et al. Linkage between Utah All Payers Claims Database and Central Cancer Registry. Health Serv Res. 2019;54(3):707-713. doi:10.1111/1475-6773.13114

40. Perraillon MC, Liang R, Sabik LM, Lindrooth RC, Bradley CJ. The role of all-payer claims databases to expand central cancer registries: experience from Colorado. Health Serv Res. 2022;57(3):703-711. doi:10.1111/1475-6773.13901

41. Meguerditchian AN, Stewart A, Roistacher J, Watroba N, Cropp M, Edge SB. Claims data linked to hospital registry data enhance evaluation of the quality of care of breast cancer. J Surg Oncol. 2010;101(7):593-599. doi:10.1002/jso.21528

42. Thompson CA, Jin A, Luft HS, et al. Population-based registry linkages to improve validity of electronic health record–based cancer research. Cancer Epidemiol Biomarkers Prev. 2020;29(4):796-806. doi:10.1158/1055-9965.EPI-19-0882

43. Lau EC, Mowat FS, Kelsh MA, et al. Use of electronic medical records (EMR) for oncology outcomes research: assessing the comparability of EMR information to patient registry and health claims data. Clin Epidemiol. 2011;3:259-272. doi:10.2147/CLEP.S23690

44. Coding pitfalls. Presented at: 2014-2015 North American Association of Central Cancer Registries Webinar Series; 2015. Accessed August 7, 2023. https://crgc-cancer.org/wp-content/uploads/2013/10/Coding-Pitfalls-2015-Slides.pdf

45. Gran B. A second opinion: rethinking the public-private dichotomy for health insurance. Int J Health Serv. 2003;33(2):283-313. doi:10.2190/BV3W-0JAR-R61K-6KAU

46. Buchmueller TC, Allen ME, Wright W. Assessing the validity of insurance coverage data in hospital discharge records: California OSHPD data. Health Serv Res. 2003;38(5):1359-1372. doi:10.1111/1475-6773.00181

47. Chapter X: data dictionary. North American Association of Central Cancer Registries. Accessed January 15, 2022. http://datadictionary.naaccr.org/default.aspx?c=10&Version=22

48. Verrill C. Assessing the reliability and validity of primary payer information in central cancer registry data. North American Association of Central Cancer Registries. Accessed October 21, 2022. https://www.naaccr.org/wp-content/uploads/2016/11/Assessing-the-Reliability-and-Validity-of-Primary-Payer-Information-in-Central-Cancer-Registry-Data.pdf

49. Chattopadhyay A, Bindman AB. Accuracy of Medicaid payer coding in hospital patient discharge data: implications for Medicaid policy evaluation. Med Care. 2005;43(6):586-591. doi:10.1097/01.mlr.0000163654.27995.fa
