The American Journal of Managed Care
May 2013, Volume 19, Issue 5

Variability in Resource Use: Diagnosing Colorectal Cancer

In a cohort of 449 patients with colorectal cancer in the VA health system, diagnostic resource use varied with facility, patient age, and patient presentation.

Objectives:

Efficient resource use is relevant in all healthcare systems. Although colorectal cancer is common, little has been published regarding the utilization of clinical resources in diagnosis.

Study Design:

The primary aim was to evaluate the patterns and factors associated with clinical services used to diagnose colorectal cancer at 14 US Department of Veterans Affairs facilities. The secondary aim was to investigate whether using more clinical services was associated with time to diagnosis.

Methods:

We reviewed medical records for 449 patients with colorectal cancer in an observational study. Study end points were the use of clinical diagnostic services grouped as laboratory tests, imaging studies, and subspecialty consultations. Cumulative logistic regression models were used to explore factors associated with each outcome.

Results:

Facility variability contributed to the variability of resource use in all models. In adjusted analyses, older patients had higher use of laboratory tests (odds ratio [OR], 1.20; 95% confidence interval [CI], 1.02-1.43) and incidentally discovered colorectal cancer was associated with increased use of consultations (OR, 1.97; 95% CI, 1.27-3.05), imaging studies (OR, 1.70; 95% CI, 1.12-2.58), and laboratory tests (OR, 3.14; 95% CI, 2.06-4.77) compared with screen-detected cancers. There was a strong direct correlation between the number of diagnostic services performed and the median time to diagnosis (Spearman correlation coefficient, 0.99; P <.001).

Conclusions:

Variability in utilization of diagnostic clinical services was associated with patient age, patient presentation, and facility. Increased resource use was highly correlated with increased time to diagnosis.

Am J Manag Care. 2013;19(5):370-376

The study evaluated the diagnostic process of 449 patients with colorectal cancer at 14 US Department of Veterans Affairs (VA) medical centers.

  • Significant variation occurred by facility despite all patients and facilities being in the VA health system.

  • Resource use was also associated with patient factors, such as age and mode of detection.

  • A strong correlation was found between median time to diagnosis and an increasing number of diagnostic tests.

  • While the study was not designed to identify the optimal practice pattern, if future studies can identify best practices, those strategies could be used to improve efficiency in the healthcare system.

Efficient use of resources to diagnose and treat common illnesses is relevant for all healthcare systems. Although colorectal cancer is the third-leading cancer in both men and women,1 little has been published regarding resource utilization during diagnosis. Instead, prior studies of colorectal cancer evaluating the diagnostic process have focused on sources of potential diagnostic delay including patient delay,2-7 practitioner delay,2,6-8 and systems delay.2,9,10 Guidelines for diagnostic strategies are not well specified in the literature except in the case of abnormal screening tests.11,12 In addition, little is known about current practices for evaluating suspected cases of colorectal cancer. Finally, it is unclear if different patterns of resource use yield equivalent outcomes.13,14

The Veterans Health Administration (VHA) is an integrated system with the operations and informatics infrastructure to facilitate both a more standardized approach to diagnosis across multiple facilities and improved coordination of care. If differences in resource utilization lead to different outcomes in the diagnosis of colorectal cancer, such differences could be further characterized to identify best practices and improve patient care throughout the system. The primary purpose of this study was to examine practice patterns and factors associated with medical services use in the diagnosis of colorectal cancer in a sample of patients at multiple, geographically diverse US Department of Veterans Affairs (VA) facilities. The secondary aim was to explore the impact of resource use on the outcome time to diagnosis.

PATIENTS AND METHODS

Setting and Sample

Figure 1

The Cancer Care Outcomes Research and Surveillance Consortium (CanCORS) is a large prospective observational cohort of newly diagnosed colorectal and lung cancer patients with 7 collaborating Primary Data Collection and Research (PDCR) teams, which have enrolled approximately 10,000 subjects.15 One of the 7 PDCR teams is a group of 14 Veterans Affairs centers (Atlanta, Baltimore, Biloxi, Brooklyn, Chicago-Hines, Chicago-Lakeside, Durham, Houston, Indianapolis, Minneapolis, Nashville, Seattle, Temple, and Tucson). Approximately 15% of the total CanCORS study population is from the VHA system. The methods of CanCORS have been previously published.15 Briefly, eligible patients were older than 21 years and had had colorectal or lung cancer diagnosed within 3 months of enrollment. Subjects were enrolled at VA sites from September 2003 through June 2005. The VA PDCR team enrolled all 470 eligible VA patients with colorectal cancer in the cohort (Figure 1).

Data Sources

The data sources and collection protocols have been previously described.16 Briefly, the main CanCORS study medical record abstraction protocol included baseline patient information and data regarding cancer-related care (for diagnosis, treatment, or surveillance) received in a window from 3 months prior to the diagnosis date until 15 months after the diagnosis date. The CanCORS date of diagnosis was the date that a tissue diagnosis of invasive cancer was confirmed. We also used medical record data regarding diagnostic tests, radiological studies, procedures, and consultations abstracted from a period of 24 months prior to the diagnosis date.16 The CanCORS Steering Committee approved this study, as did the Durham VA Institutional Review Board and Research and Development Committee.

Independent Variables

The VA CanCORS medical record abstraction provided the diagnosis and the patient-level variables: age, gender, race (collapsed into Caucasian and non-Caucasian), marital status (married, unmarried), cancer stage at diagnosis, and comorbidity level (none, mild, moderate, or severe) using the Adult Comorbidity Evaluation (ACE-27) index.17 Stage at diagnosis was collapsed into early stage (stage 1 or 2) and late stage (stage 3 or metastatic). All variables except cancer stage were obtained prior to the diagnosis of colorectal cancer.

As previously described,16 diagnostic category was assigned as 1 of 3 a priori determined, mutually exclusive categories based on the process by which the patient was diagnosed: screen-detected, symptom-detected, and other (ie, in the process of evaluating another medical concern). Patients who presented emergently with obstruction or perforation were excluded from the analysis because such patients underwent diagnostic tests and treatment immediately after presentation.

Finally, 2 facility-level covariates were considered: complexity score and academic affiliation (association with medical school residency training program). Complexity score (low, medium, high) is a summary of 7 variables representing patient volume and severity, availability of certain services, and levels of teaching and research.18

Study End Points

Diagnostic clinical services, including imaging (plain film, computed tomography [CT] scan, magnetic resonance imaging [MRI], ultrasound, barium enema, endoscopic ultrasound), laboratory tests (hemoglobin, hematocrit, ferritin, iron, carcinoembryonic antigen [CEA], fecal occult blood test [FOBT], and “other” laboratory tests), and subspecialty consultations (surgical, gastroenterology, hematology/oncology) were abstracted. Within each diagnostic clinical service group (imaging, laboratory tests, subspecialty consultation), each different type of test or consultation performed (eg, hematocrit or ferritin would be different types of services) was first documented with up to 5 incidences of each service within the pre-diagnosis study period. For analysis, a specific type of test or consultation was counted only once regardless of the number of times the clinical service was documented (eg, if a patient had 5 different hematocrit measurements, the variable was only counted once). Then, the number of different types of imaging tests, laboratory tests, or subspecialty consultations was tallied by group. For example, a patient who had only FOBT, CEA, and barium enema would have tallied counts of 1 imaging, 2 laboratory tests, and 0 subspecialty consultations. Tally categories with low counts were consolidated to improve numerical stability and reduce the variance of estimates.
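The tallying rule above can be sketched in a few lines. This is an illustration only: the service names and the group mapping are simplified stand-ins, not the study's abstraction codes, and the consolidation threshold of 3 mirrors the categories described in the Results.

```python
# Illustrative mapping of service types to groups; the study abstracted a
# fuller list (plain film, CT, MRI, ultrasound, barium enema, etc.).
SERVICE_GROUPS = {
    "FOBT": "laboratory", "CEA": "laboratory", "hematocrit": "laboratory",
    "ferritin": "laboratory",
    "CT": "imaging", "barium_enema": "imaging", "MRI": "imaging",
    "surgery": "consultation", "gastroenterology": "consultation",
}

def tally_services(documented):
    """Tally distinct service types per group: repeats of one type count
    once, and counts of 3 or more are consolidated into a top category."""
    counts = {"imaging": 0, "laboratory": 0, "consultation": 0}
    for service in set(documented):      # de-duplicate repeated tests
        counts[SERVICE_GROUPS[service]] += 1
    return {group: min(n, 3) for group, n in counts.items()}

# The paper's example: FOBT, CEA, and barium enema (here with FOBT done
# twice) tallies as 1 imaging, 2 laboratory tests, 0 consultations.
tally = tally_services(["FOBT", "FOBT", "CEA", "barium_enema"])
```

Counting distinct types rather than raw repetitions, as the study does, keeps the outcome from being dominated by routinely repeated tests such as serial hematocrits.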

Initial event date was defined as the abnormal screening test result date (screen-detected), the first medical visit documenting a symptom (symptom-detected), or the abnormal test result date (other). Time to diagnosis was defined as the time from initial event until the diagnosis date.
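As a worked example of this definition (the dates below are hypothetical, not from the study data):

```python
from datetime import date

def time_to_diagnosis(initial_event: date, diagnosis: date) -> int:
    """Days from the initial event (abnormal screen result, first visit
    documenting a symptom, or abnormal test result) to tissue diagnosis."""
    return (diagnosis - initial_event).days

# A hypothetical screen-detected case: abnormal FOBT on January 15,
# tissue-confirmed diagnosis on April 1 of the same (leap) year.
days = time_to_diagnosis(date(2004, 1, 15), date(2004, 4, 1))  # 77 days
```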

Statistical Analysis

Diagnostic clinical services utilization was analyzed using cumulative logistic regression modeling.19 Separate models were used for each diagnostic service group outcome (imaging studies, laboratory tests, subspecialty consultation). All models were adjusted for inter-facility variation by use of a categorical covariate indicating facility identity. The cumulative logistic model was chosen because of its appropriateness for handling the ordered categories of counts of the types of services utilized within a given diagnostic service group. The primary assumption of the cumulative logistic model, that the effect of each covariate is proportionate across the ordered categories of the outcome variable (extent of utilization), was tested using the score test for the proportional odds assumption (P <.01 indicates a potentially important violation of the assumption) and, if needed, by the partial proportional odds method.20
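The cumulative logistic model treats the ordered utilization categories through their cumulative probabilities: P(Y <= j | x) = 1 / (1 + exp(-(alpha_j + beta * x))), with a single slope beta shared across every threshold j, which is exactly what the proportional odds assumption asserts. A minimal sketch, with made-up thresholds and slope purely for illustration:

```python
import math

def cumulative_logit_probs(x, alphas, beta):
    """Category probabilities under a proportional-odds model.
    alphas must be increasing; beta is shared across all thresholds,
    which is the proportional-odds assumption tested in the paper."""
    cum = [1.0 / (1.0 + math.exp(-(a + beta * x))) for a in alphas]
    cum.append(1.0)                                  # P(Y <= top) = 1
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Four ordered utilization categories (0, 1, 2, or 3+ services) with
# hypothetical thresholds and a hypothetical covariate effect:
probs = cumulative_logit_probs(x=1.0, alphas=[-1.0, 0.0, 1.5], beta=0.5)
```

Because the same beta shifts every cumulative logit, a covariate moves the whole distribution toward higher or lower utilization; the score test checks whether a single shared beta is tenable.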

For each study end point, candidate variables (described above under the independent variables section) were examined using unadjusted (bivariate) regressions. Variables moderately associated with the outcome (P <.25) were included in the final multivariable model. Final multivariable models are presented for each outcome as odds ratios, associated 95% confidence intervals, and P values. Each reported odds ratio that is greater than 1.0 can be interpreted as increasing the odds of the outcome (more utilization of the variable). For example, an odds ratio of 2.0 would increase the odds of higher utilization of a variable, while an odds ratio of 0.5 would decrease the odds of higher utilization.
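The reported odds ratios follow from the model coefficients in the usual way. As a sketch (assuming Wald-type intervals, which the paper does not state), the reported consultation OR of 1.97 (95% CI, 1.27-3.05) corresponds to a log-odds coefficient of roughly 0.68 with a standard error of about 0.22:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-model coefficient and its standard error into an
    odds ratio with a 95% Wald confidence interval: exp(beta +/- z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient chosen to roughly reproduce the reported
# consultation result of OR 1.97 (95% CI, 1.27-3.05):
or_est, lo, hi = odds_ratio_ci(beta=0.68, se=0.22)
```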

Table 1

Recognizing the potential for bidirectional causality, the association between time to diagnosis and the number of different tests was explored using the Spearman correlation coefficient (Table 1).

Data analysis was conducted at the Durham VA Medical Center, the coordinating site for the VA PDCR team. CanCORS data set versions used were: Core 1.14, Survey 1.11, and MRA 1.12. Statistical analyses were performed using SAS for Windows Version 9.2 (SAS Institute, Cary, North Carolina).

RESULTS

Sample Characteristics

Table 2

Figure 2

VA CanCORS included 470 subjects with colon or rectal cancer. Of these, 21 were excluded from the analysis because they presented emergently. The baseline demographic information of the cohort is shown in Table 2. Gender was not included in the models because of the small number of female participants included in the study. All but 1 study site had the highest complexity score and all the sites were academically affiliated. Thus, the values of these variables did not exhibit enough variation across the 14 sites to be useful and were not considered in the models. All patients had undergone colonoscopy. Therefore, this procedure was not counted with the diagnostic services outcome. Figure 2 is a cross-tabulation of the types of clinical services tallied by group (imaging, laboratory tests, subspecialty consultation) and the number of subjects who received each total tally of different clinical services within each group. While some subjects had undergone up to 6 different imaging tests, 6 different laboratory tests, or 4 different subspecialty consultations, each of the 3 outcomes was consolidated to 4 values: 0, 1, 2, ≥3 (as previously discussed in the Methods section).

Table 3

Facility was a significant source of variability in the use of imaging (P <.0001), consults (P <.0001), and laboratory tests (P = .012) and was included as an adjustment variable in all 3 models (Table 3). In the adjusted model, increasing age was significantly associated with an increase in use of laboratory tests (Table 3), although there was no evidence of an association between cumulative imaging test use and age and only a suggestion of an association between consultation use and age. The diagnostic category was associated with differences in cumulative clinical services use. Diagnostic category “other” subjects had significantly increased utilization in all medical service categories (imaging, laboratory tests, and consultations) when compared with subjects with screen-detected neoplasms (Table 3). Symptom-detected diagnoses had significantly increased utilization for laboratory tests and consultations.

There was no evidence of significant differences in cumulative diagnostic services utilization by race, stage of disease, comorbidity index (ACE-27), or marital status.

The models for imaging tests and special consults did not violate model assumptions by the score test for the proportional odds assumption. The model for the laboratory tests failed the proportional odds test. We further investigated the model using the partial proportional odds method and found that the nonproportionality had little influence on estimates.

The relationship between the mean time to diagnosis and the number of different clinical services utilized is shown in Table 1. The nearly monotonically increasing means suggest a strong association between increasing time to diagnosis and counts of types of diagnostic clinical services received (Spearman correlation coefficient, 0.99; P <.001). For counts of 3 or greater, the mix of diagnostic categories remains constant; therefore, changes in this mix do not account for the increasing time to diagnosis. Likewise, the proportions of the 3 types of services (imaging, laboratory tests, subspecialty consultations) are essentially constant as well (data not shown).
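The Spearman coefficient used here is simply the Pearson correlation computed on the within-variable ranks, which makes it sensitive to any monotone relationship rather than only a linear one. A self-contained sketch (with average ranks for ties; the example data are hypothetical):

```python
def _ranks(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend the tie group
        avg = (i + j) / 2 + 1            # mean rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Counts of distinct services vs hypothetical mean days to diagnosis:
# a strictly increasing pattern yields a coefficient of 1.0.
rho = spearman([0, 1, 2, 3, 4, 5], [12, 25, 40, 66, 90, 150])
```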

CONCLUSIONS

In this multisite study, the utilization of resources to diagnose colorectal cancer varied significantly by facility, patient age, and clinical presentation. Some of these differences could be anticipated based on clinical experience. For example, one would expect additional investigative efforts in patients who have cancer diagnosed while another clinical concern is being evaluated compared with asymptomatic patients diagnosed while undergoing a screening test. The fact that we found fewer tests in the asymptomatic patients provides face validity to the medical record documentation and data collection. Surprisingly, there was no difference in utilization by comorbidity burden. Patients with many comorbid conditions might prove more difficult to diagnose given distracting medical conditions, but this hypothesis was not borne out in this cohort of patients.

Inter-facility variation was highly significant in the use of imaging, laboratory tests, and subspecialty consultations. Within the United States, regional differences in healthcare spending and resource allocations have been previously documented by evaluating the use of Medicare dollars.21-23 Although geographic differences have also been documented in the VHA system,24 to our knowledge no study has documented facility differences in resource use to diagnose colorectal cancer within the VHA or other systems. The delivery systems of the VHA should provide similar funding and incentives throughout the system, and the facilities included in this study were almost all in the highest resource category by complexity score. Nonetheless, the extent to which specific resources such as subspecialty consultation and different imaging modalities are readily available likely varies by facility. In addition, local physician practice patterns and community standards of care, independent of specific resource availability, may vary by facility. The current study was not designed to ascertain the reason for such variability, but further investigation is warranted.

The impact of higher versus lower resource use on patient outcomes, such as time to diagnosis, is also an important consideration. Anecdotally, some physicians initially order a broad array of tests in an attempt to decrease time to diagnosis, especially for patients who may have difficulty making repeated trips to a physician’s office. Alternatively, increased use of diagnostic services could reflect less efficient diagnostic processes such as a patient being evaluated by a hematologist for iron deficiency anemia prior to an endoscopic evaluation. Finally, this finding could be an artifact reflecting more opportunity to perform tests if the time interval until diagnosis was longer. In the secondary analysis we observed a strong direct correlation between resource use and mean time to diagnosis. Future research creating and testing diagnostic decision aids or interventions to increase coordination of medical services across specialties could potentially decrease resource use and increase diagnostic efficiency.

The results should be considered in the context of the study’s limitations. The study relies on data abstracted from the medical record. The VA medical record is electronic and linked, allowing for more comprehensive data collection, but some VA patients receive medical services outside of the VA system. However, the study team had the same access to documented non-VA medical care that the VA physician caring for the patient would have had. While it is possible that other studies were obtained at non-VA facilities and not documented in the VA medical record, it is unlikely that those undocumented tests and results were known to the VA physicians or impacted what medical services they obtained to make a diagnosis. It was not possible to reconstruct the clinical assessment driving the diagnostic decisions, nor can we establish causality between testing and diagnostic delay. Therefore, it is premature to make judgments or recommendations regarding the best diagnostic strategies based on our results. Certainly, some patients have a more complex presentation and require increased diagnostic testing to obtain the correct diagnosis. Additionally, there could be a discrepancy in the severity of disease (either colorectal cancer or comorbidities) at different facilities. However, while there may have been differing degrees of severity of disease, there was no evidence of differing usage of resources based either on stage of disease or comorbidity index score. Thus, even if significant differences were present, it is unclear if they would significantly change the outcome. Another potential limitation is that the patient population may not be representative of other populations. While there was a male predominance, the remaining characteristics such as age and comorbidity burden are consistent with a population with colorectal cancer. Lastly, the data reflect practices in a single health system in the diagnosis of a single malignancy. Extrapolating these data to alternative healthcare systems and other conditions may prove difficult.

This study also has a number of strengths. The data were collected with a standardized abstraction protocol to enhance data quality. Additionally, the data are from a comprehensive linked electronic medical record in a US-based healthcare system that included care at any VA facility and provided clinical details and information that may not be available in administrative databases. The large sample from multiple centers across the VHA system allowed exploration of interfacility variation that was not confined to 1 region or group of hospitals. Additionally, this analysis provides an excellent accounting of studies performed in the diagnosis of colorectal cancer, which had not been previously addressed in the literature. Lastly, the use of the VHA system, in which physicians are salaried, provides insight into variable practice patterns independent of physician financial incentives.

All healthcare systems strive for maximal efficiency of both time and resources in care for patients. Such efficiency is particularly important for a system such as the VHA.25 This study shows that significant variations exist in the resource utilization prior to a diagnosis of colorectal cancer. If future investigations can define best practices, these strategies could provide guidance for improved efficiency and coordination for colorectal cancer diagnosis in a variety of healthcare settings.

Author Affiliations: From Division of Gastroenterology, Department of Medicine (FDS), Duke University Medical Center, Durham, NC; Epidemiologic Research and Information Center (DHA), Durham Veterans Affairs Medical Center, Durham, NC; Center for Health Services Research in Primary Care (SCG), Durham Veterans Affairs Medical Center, Durham, NC; Department of Biostatistics and Bioinformatics (SCG), Duke University Medical Center, Durham, NC; Durham Veterans Affairs Medical Center (DP), Durham, NC; Division of Gastroenterology (DP), Department of Medicine, Duke University Medical Center, Durham, NC; Division of Gastroenterology and Hepatology (RSS), Department of Medicine, University of North Carolina (RSS), Chapel Hill, NC; Center for Health Services Research in Primary Care (DAF), Durham Veterans Affairs Medical Center, Durham, NC; Division of Gastroenterology (DAF), Department of Medicine, Duke University Medical Center, Durham, NC.

Funding Source: The work of the CanCORS consortium was supported by grants from the National Cancer Institute to the Statistical Coordinating Center (U01 CA093344) and the NCI supported Primary Data Collection and Research Centers (Dana-Farber Cancer Institute/Cancer Research Network U01 CA093332, Harvard Medical School/Northern California Cancer Center U01 CA093324, RAND/UCLA U01 CA093348, University of Alabama at Birmingham U01 CA093329, University of Iowa U01 CA.01013, University of North Carolina U01CA 093326) and by a Department of Veterans Affairs grant to the Durham VA Medical Center CRS 02-164.

Author Disclosures: Dr Grambow reports receiving consulting fees for Data and Safety Monitoring Board membership for Gilead Sciences hepatitis C drug trials. Dr Provenzale was supported in part by a NIH K24 grant 5 K24 DK002926. Dr Fisher reports receiving consulting fees from Epigenomics, Inc. She was supported in part by a VA Health Services Research and Development Career Development Transition Award (RCD 03-174). The other authors (FDS, DHA, RSS) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (SCG, DAF); analysis and interpretation of data (FDS, DHA, SCG, RSS, DP, DAF); drafting of the manuscript (FDS, SCG, DAF); critical revision of the manuscript for important intellectual content (FDS, DHA, SCG, RSS, DP, DAF); statistical analysis (DHA, SCG); provision of study materials or patients (DAF); obtaining funding (DAF); and supervision (DAF).

Address correspondence to: F. Douglas Srygley, MD, Division of Gastroenterology, Department of Medicine, Duke University Medical Center, DUMC Box 3913, Durham, NC 27710. E-mail: douglassrygley@gmail.com.

1. Jemal A, Siegel R, Ward E, et al. Cancer statistics, 2009. CA Cancer J Clin. 2009;59(4):225-249.

2. Mitchell E, Macdonald S, Campbell NC, et al. Influences on pre-hospital delay in the diagnosis of colorectal cancer: a systematic review. Br J Cancer. 2008;98(1):60-70.

3. Korsgaard M, Pedersen L, Sørensen HT, et al. Reported symptoms, diagnostic delay and stage of colorectal cancer: a population-based study in Denmark. Colorectal Dis. 2006;8(8):688-695.

4. Langenbach MR, Schmidt J, Neumann J, Zirngibl H. Delay in treatment of colorectal cancer: a multifactorial problem. World J Surg. 2003;27(3):304-308.

5. Bain NS, Campbell NC, Ritchie LD, et al. Striking the right balance in colorectal cancer care--a qualitative study of rural and urban patients. Fam Pract. 2002;19(4):369-374.

6. Mariscal M, Llorca J, Prieto D, et al. Determinants of the interval between the onset of symptoms and diagnosis in patients with digestive tract cancers. Cancer Detect Prev. 2001;25(5):420-429.

7. Young CJ, Sweeney JL, Hunter A. Implications of delayed diagnosis in colorectal cancer. Aust N Z J Surg. 2000;70(9):635-638.

8. Carter S, Winslet M. Delay in the presentation of colorectal carcinoma: a review of causation. Int J Colorectal Dis. 1998;13(1):27-31.

9. Robertson R, Campbell NC, Smith S, et al. Factors influencing time from presentation to treatment of colorectal and breast cancer in urban and rural areas. Br J Cancer. 2004;90(8):1479-1485.

10. Ahuja N, Chang D, Gearhart SL. Disparities in colon cancer presentation and in-hospital mortality in Maryland: a ten-year review. Ann Surg Oncol. 2007;14(2):411-416.

11. Engstrom P, Arnoletti JP, Benson AB, et al. NCCN Practice Guidelines-Colorectal Cancer. http://www.nccn.org/professionals/physician_gls/f_guidelines.asp. Published September 10, 2010. Accessed October 15, 2010.

12. Levin B, Lieberman DA, McFarland B, et al. American Cancer Society Colorectal Cancer Advisory Group; US Multi-Society Task Force; American College of Radiology Colon Cancer Committee. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. Gastroenterology. 2008;134(5):1570-1595.

13. Bodenheimer T, Wagner EH, Grumbach K. Improving primary care for patients with chronic illness: the chronic care model, part 2. JAMA. 2002;288(15):1909-1914.

14. Jaén CR, Ferrer RL, Miller WL, et al. Patient outcomes at 26 months in the patient-centered medical home National Demonstration Project. Ann Fam Med. 2010;8(suppl 1):S57-S67.

15. Ayanian JZ, Chrischilles EA, Fletcher RH, et al. Understanding cancer treatment and outcomes: the Cancer Care Outcomes Research and Surveillance Consortium. J Clin Oncol. 2004;22(24):2992-2996.

16. Fisher DA, Zullig LL, Grambow SC, et al. Determinants of medical system delay in the diagnosis of colorectal cancer within the Veteran Affairs Health System. Dig Dis Sci. 2010;55(5):1434-1441.

17. Piccirillo JF, Creech C, Zequeira R. Inclusion of comorbidity into oncology data registries. J Registry Manag. 1999;26(2):66-70.

18. VA Facility Complexity Model. Oncology Program Evaluation, Facilities Survey Report at www.va.gov/cancer/. Accessed February 7, 2010.

19. Allison PD. Logistic Regression Using the SAS System: Theory and Application. Cary, NC: SAS Institute Inc.

20. Stokes ME, Davis CS, Koch GG. Categorical Data Analysis Using the SAS System. Cary, NC: SAS Institute Inc.

21. Wennberg JE, Fisher ES, Skinner JS. Geography and the debate over Medicare reform. Health Aff (Millwood). 2002;W96-W114.

22. Welch WP, Miller ME, Welch HG, et al. Geographic variation in expenditures for physicians’ services in the United States. N Engl J Med. 1993;328(9):621-627.

23. Fisher ES, Wennberg JE, Stukel TA, et al. Associations among hospital capacity, utilization, and mortality of US Medicare beneficiaries, controlling for sociodemographic factors. Health Serv Res. 2000;34(6):1351-1362.

24. Ashton CM, Petersen NJ, Souchek J, et al. Geographic variations in utilization rates in Veterans Affairs hospitals and clinics. N Engl J Med. 1999;340(1):32-39.

25. Yaisawarng S, Burgess JF, Jr. Performance-based budgeting in the public sector: an illustration from the VA healthcare system. Health Econ. 2006;15(3):295-310.
