The American Journal of Managed Care
In this study, providers were more likely to achieve processes-of-care goals when diabetes care was bundled at the indicator level than at the patient level.
Objectives:
To evaluate processes and outcomes of diabetes care using bundled indicators from a primary care registry of osteopathic training programs.
Study Design:
Retrospective cohort analysis.
Methods:
This study examined care delivered to 7333 patients across 95 family practice and internal medicine residency programs (July 1, 2005, through September 15, 2008) to determine diabetes care performance using measures of processes of care and outcomes. Two summary (bundled) reports of care for each measure were constructed. The first used the frequency of indicated care delivered (indicator-level bundle), and the second used the frequency of patients’ receiving all indicated care (patient-level bundle).
Results:
Use of the indicator-level bundle demonstrated that outcomes goals were achieved at a rate of 44.5%. Use of the patient-level bundle demonstrated that outcomes goals were achieved at a rate of only 16.2%, a significant difference (P <.001). Eight evidence-based processes of diabetes care were then examined using the 2 bundling methods. The indicator-level analysis mean rate for the bundled processes of care was 77.3%, whereas the patient-level analysis mean rate was only 33.5%. This was also significantly different (P <.001).
Conclusions:
The method of bundling care measures can have a profound effect on the reporting of goals achieved. This can in turn influence the assessment of provider performance and opportunity gaps in diabetes care delivery. In this study, providers were more likely to achieve processes-of-care goals when diabetes care was bundled at the indicator level than at the patient level. Standardization of summary reporting of diabetes care should be developed to enhance consistent interpretation of performance.
(Am J Manag Care. 2010;16(1):25-32)
The quality of diabetes care in the United States falls short of national standards, and performance measurement is intended to improve quality of care.
Diabetes mellitus, a disease newly diagnosed in 1 million Americans each year, is frequently encountered by primary care physicians.1 It is estimated that the care of persons with diabetes in the United States costs $174 billion annually.1,2 Evidence-based ambulatory guidelines have been developed for diabetes care, including management of glucose level,3 lipid levels,3,4 and blood pressure.3,5
Despite high-quality studies6,7 supporting the benefit of multimodal intensive diabetes management, care has fallen short by all measures.8-10 For example, it has been repeatedly shown that less than 50% of persons with diabetes achieve target glycosylated hemoglobin (A1C) levels.11 One proposed method of improving diabetes care is to create incentives for physicians to better manage patients. Performance measurement is a system that can be used to provide incentives for care. With the recent increased focus on physician performance by the Centers for Medicare & Medicaid Services and by other payers, ambulatory measures of quality in diabetes care have been developed.3,11-13
Many experts believe that economic incentives are not aligned to reward higher quality of care. The financial incentives of the US primary care health system are based on the number of patients seen (quantity of care), not on quality of care. However, momentum is gaining to provide incentives for quality of care, or pay for performance. In a survey of 252 health maintenance organizations, more than half (covering >80% of the total enrolled) included pay for performance in their contracts.14 Several clinical trials have evaluated pay for performance.15-17 Lindenauer et al18 reported that hospitals that engaged in pay for performance achieved greater improvements in all composite measures of quality.
As performance measures of care have proliferated, there has been a drive to create summary measures of provider care. The next generation of performance measures may move beyond individual care goals and give recognition only when all composite end points have been reached.15,17,18 The theory behind this “all-or-none” (bundled) performance measurement is that, if all steps are not completed or outcomes achieved, the quality of care is still lacking. Models that measure bundled performance have been used in the measurement of hospital-delivered care. The Centers for Medicare & Medicaid Services in their 8th Scope of Work19 moved to a bundled approach in defining hospital care measures. For example, this has been applied to pneumonia care, congestive heart failure, and acute myocardial infarction.19 In addition, this model has been successful in reducing surgical infection rates in the hospital.20 However, the effect of bundling is unexplored in the outpatient setting.
There are several ways that measures can be bundled. Care can be bundled by the processes of care that are completed. This evaluates the systems built into a practice that assure continuity of care, such as reminders for eye examinations among persons with diabetes. More commonly, intermediate outcomes can be bundled to determine if all goals (eg, low-density lipoprotein cholesterol level, blood pressure, and A1C level) are achieved. This measures actions of the patient and of his or her physician.
Furthermore, these 2 measures can be bundled by patient and by indicator. This bundling method can be applied to processes-of-care and outcomes achievement. The indicator-level bundle is the percentage of all processes of care indicated for all patients that are performed, and the patient-level bundle is the percentage of all patients who have received all indicated processes of care. An example is given in Table 1.
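In notation (a restatement of the definitions above, not taken verbatim from the article), suppose there are $N$ patients and patient $i$ is eligible for $e_i$ processes of care, of which $d_i$ are delivered. Then

\[
\text{indicator-level bundle} = \frac{\sum_{i=1}^{N} d_i}{\sum_{i=1}^{N} e_i},
\qquad
\text{patient-level bundle} = \frac{\#\{\, i : d_i = e_i \,\}}{N}.
\]

For example, if 3 patients are each eligible for 4 processes and receive 4, 3, and 3 of them, the indicator-level bundle is 10/12 (83%), whereas the patient-level bundle is 1/3 (33%). The same construction applies to the outcomes measures, with goal achievement in place of care delivery.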
The relative value of these methods depends on how the performance information is being used. The indicator-level method provides a measure of operational efficiency, whereas the patient-level method provides a more patient-centric measure, potentially having more meaning to a patient and answering the question “what is my probability of receiving all indicated care or achieving all recommended outcomes?” The American Osteopathic Association developed the Clinical Assessment Program (AOA-CAP) database to serve as a quality improvement tool for physicians in training to evaluate the safety of patient care in the ambulatory setting. This primary care registry of osteopathic training programs uses evidence-based standards of care to consistently collect information on diabetes care.
The objective of this study was to evaluate diabetes care at family practice and internal medicine osteopathic residency training programs using the AOA-CAP database. Furthermore, we evaluated how the bundling of processes-of-care and outcomes measures affected the overall performance score. This study was approved by the Ohio State University Institutional Review Board.
Methods
Data Source
The AOA-CAP database, a Web-based primary care registry of osteopathic training programs, was used in this study. The AOA-CAP database collects information from family practice and internal medicine residency programs on processes of care and outcomes in a sample of their patients. For this study, we only accessed the diabetes measure data set. To enter information in the AOA-CAP database, residency programs are instructed to acquire a random sample from their diabetes medical records. Residents enter data using a standard set of disease-specific processes-of-care and outcomes measures. These reported measures are guided by national standard-setting organizations such as the National Committee for Quality Assurance and the American Diabetes Association. These data are provided to the AOA annually from programs as part of the residency accreditation process. Reports regarding performance are then provided back to the program.
Subjects and Settings
Data were abstracted from AOA family practice and internal medicine residency programs between July 1, 2005, and September 15, 2008. Residents were instructed to enter only those patients having confirmed diagnosis of type 2 diabetes mellitus with at least 2 visits to the clinic in the previous year for diabetes. Patients treating their disease with lifestyle modification only were not included in this study. Programs were asked to choose 40 randomly selected patients who met the inclusion and exclusion criteria for the AOA-CAP database. However, not all programs had 40 patients who met these criteria. Programs contributing fewer than 20 patients were excluded from analysis. Data entered into this database were deidentified. The database provides information on care delivered to patients with diabetes, defined as having at least 2 visits with an International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis of diabetes mellitus during the study year and being treated for diabetes with a medication during the study year.
Processes-of-Care and Outcomes Measures
Processes and outcomes measures of diabetes care were used to assess the adequacy of diabetes care (see eAppendix available at www.ajmc.com). The processes-of-care and outcomes measures are consistent with those recommended by the National Quality Forum, the National Committee for Quality Assurance, and the American Medical Association Physician Consortium for Performance Improvement. Measures were vetted by the AOA-CAP steering committee. Processes-of-care measures identify the interaction between healthcare providers and patients, including diagnosis, surveillance of complications, and treatment of disease. Outcomes measures are the result of the interaction between patient and physician and the ability to get a patient to target goal. A summary of diabetes processes-of-care and outcomes indicators is given in Table 2.
The processes-of-care and outcomes measures were bundled using 2 methods: indicator-level and patient-level analyses.
Processes of Care. An indicator-level processes-of-care bundle was created by developing a denominator of all processes of care for which patients with diabetes were eligible and a numerator of the number of times the indicated process of care was delivered. A patient-level processes-of-care bundle was created by using the patients as the denominator and the number of times the patients received all indicated care as the numerator.
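As a minimal sketch of how these 2 processes-of-care bundles could be computed (the records, field names, and process names below are hypothetical illustrations, not the AOA-CAP data structure):

```python
# Hypothetical patient records: for each patient, the processes of care
# for which the patient is eligible and those actually delivered.
patients = [
    {"eligible": {"A1C test", "LDL test", "eye exam", "foot exam"},
     "delivered": {"A1C test", "LDL test", "eye exam", "foot exam"}},
    {"eligible": {"A1C test", "LDL test", "eye exam"},
     "delivered": {"A1C test", "LDL test"}},
    {"eligible": {"A1C test", "LDL test", "foot exam"},
     "delivered": {"A1C test", "foot exam"}},
]

# Indicator-level bundle: delivered indicated processes / all indicated processes.
total_indicated = sum(len(p["eligible"]) for p in patients)
total_delivered = sum(len(p["eligible"] & p["delivered"]) for p in patients)
indicator_level = total_delivered / total_indicated

# Patient-level bundle: patients receiving every indicated process / all patients.
all_care = sum(p["eligible"] <= p["delivered"] for p in patients)
patient_level = all_care / len(patients)

print(f"Indicator-level bundle: {indicator_level:.1%}")  # 8/10 = 80.0%
print(f"Patient-level bundle:   {patient_level:.1%}")    # 1/3  = 33.3%
```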
Intermediate Outcomes. An indicator-level outcomes bundle was created by using the denominator of all opportunities for patients to achieve goals of blood pressure, lipid levels, and glucose control. The numerator represents the number of times the goals were achieved across all patients. Similarly, a patient-level outcomes bundle was created by using the patients as the denominator and the number of times the patients achieved all of the following goals: blood pressure less than 130/85 mm Hg, low-density lipoprotein cholesterol level less than 100 mg/dL, and A1C level less than 7% (to convert cholesterol level to millimoles per liter, multiply by 0.0259).
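A similar sketch for the intermediate-outcomes bundles, applying the 3 goal thresholds named above to hypothetical patient values:

```python
# Hypothetical intermediate-outcome values for 3 patients; the thresholds below
# are the goals named in the text (BP <130/85 mm Hg, LDL <100 mg/dL, A1C <7%).
patients = [
    {"sbp": 124, "dbp": 78, "ldl": 92,  "a1c": 6.8},
    {"sbp": 138, "dbp": 84, "ldl": 96,  "a1c": 7.4},
    {"sbp": 128, "dbp": 82, "ldl": 118, "a1c": 6.9},
]

def goals_met(p):
    """Return one boolean per outcome goal for a single patient."""
    return [
        p["sbp"] < 130 and p["dbp"] < 85,  # blood pressure goal
        p["ldl"] < 100,                    # LDL cholesterol goal
        p["a1c"] < 7.0,                    # A1C goal
    ]

results = [goals_met(p) for p in patients]

# Indicator-level bundle: goals achieved / all goal opportunities.
indicator_level = sum(sum(r) for r in results) / sum(len(r) for r in results)

# Patient-level bundle: patients achieving all 3 goals / all patients.
patient_level = sum(all(r) for r in results) / len(results)

print(f"Indicator-level outcomes bundle: {indicator_level:.1%}")  # 6/9 = 66.7%
print(f"Patient-level outcomes bundle:   {patient_level:.1%}")    # 1/3 = 33.3%
```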
Statistical Analysis
Percentile distributions of programs were calculated based on the indicator or the patient as the unit of analysis. Performance was based on the proportion of goals achieved. SAS version 9.1 (SAS Institute, Cary, NC) was used in the percentile calculations.
The 2 methods were examined for differences in performance-based goals achieved using the following 3 comparisons: (1) indicator-level processes-of-care bundle versus indicator-level outcomes bundle, (2) patient-level processes-of-care bundle versus patient-level outcomes bundle, and (3) patient-level processes-of-care bundle versus indicator-level processes-of-care bundle. Pearson χ2 analysis was performed. Statistical significance was set at the 5% level. SPSS version 17.0 (SPSS Inc, Chicago, IL) was used in the calculations.
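As an illustration of this type of comparison (the 2x2 counts below are hypothetical, not the study's actual denominators, and SciPy is assumed to be available; this is not the authors' analysis code), a Pearson χ2 test contrasting two bundled proportions could look like:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table comparing two bundling methods: counts of
# "goal met" versus "goal not met" under each method (hypothetical numbers).
table = [
    [773, 227],  # indicator-level bundle: met, not met
    [335, 665],  # patient-level bundle:   met, not met
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p_value:.3g}")
```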
Results
A total of 95 residency programs contributed 7333 cases of diabetes to the study. Programs contributed a maximum of 818 cases and a minimum of 20 cases, with a mean of 58 cases. The demographics of the patient sample are given in Table 3. The types of residency programs contributing data were almost evenly split, with 52.5% of cases contributed by family practice and the remainder by internal medicine. The mean age of the cohort was 56.9 years, with 56.0% of cases being female. All patients were treated with medication (by study criteria), with 64.8% of patients receiving oral hypoglycemic agents and the remainder receiving insulin or a combination of insulin and oral medication. White race/ethnicity was most frequent at 56.5%, followed by African American at 23.0%, Hispanic at 10.6%, and the remainder being other races/ethnicities or not reported.
Analysis was based on the following 2 frames: (1) the indicator-level bundle, which treats each process of care or outcome as an opportunity to provide good care, and (2) the patient-level bundle, which provides an estimate of the percentage of patients receiving all indicated care or achieving all desired outcomes. Table 2 gives the results of the processes-of-care and outcomes measures and the mean rate of performance for each goal. Table 4 gives the distribution of performance across programs using the 2 methods described for measurement. The distribution of the indicator-level bundle was higher at all percentiles for processes and outcomes of care.
Using the indicator-level bundle, the mean rate of performance on processes of care across all programs was 77.3%, and the mean rate of performance on outcomes was 44.5% (P <.001) (Table 5 and Table 6). The patient-level bundle revealed that the mean rate of performance on processes of care across all programs was 33.5% and the mean rate of performance on outcomes was 16.2% (P <.001). Overall, the distributions for patient-level bundles were lower than those for indicator-level bundles.
Comparing the 2 bundling methods for processes of care showed that the method of bundling also affected performance results. The indicator-level processes-of-care bundle demonstrated that indicated care was delivered 77.3% of the time across the population; when evaluating how many patients received all indicated processes of care, this dropped to 33.5%, which was significantly lower (P = .001) (Table 5 and Table 6). A similar difference was found for the outcomes measures: at the indicator level, goals for blood pressure, glucose, or lipid control were achieved 44.5% of the time, but only 16.2% of patients achieved control of all 3 (patient-level bundle). This difference was also statistically significant (P <.001).
Discussion
The concept of pay for performance has been developed to reward systems of care that achieve desired outcomes and to limit incentives to those who do not meet standards of care. Using a bundled, or all-or-none, approach demands that systems of care be developed that incorporate a team approach and goal-focused care so that optimal care is provided. Proponents of the bundling method argue that it provides an example of best practices.
However, as shown herein, the method of bundling care has significant effects on performance achievement.21-24 For example, in this study, resident physicians were more likely to achieve the goal in processes of care (low-density lipoprotein cholesterol test ordered in the past year) as opposed to outcomes (low-density lipoprotein cholesterol level <100 mg/dL). Completing a task is often easier than completing the task successfully. Meeting an outcome measure also involves factors outside of the physician’s control such as patient genetics, patient adherence, and system factors such as access to care and formulary of medications covered to treat the disease process. The need to adjust outcomes for various patient and system factors outside of a physician’s processes-of-care control has led to risk-adjustment methods in the inpatient setting where outcomes such as mortality are investigated.25 To date, performance measurement and bundling programs have shown mixed results for improvements in diabetes outcomes.26-29
In this study, there were significant differences in performance when using different bundling methods. There was an absolute difference of 33.7% comparing the frequency of processes of care when bundled by indicator level versus patient level. In addition, there was a 28.0% difference comparing the frequency of outcomes achieved when bundled by indicator level versus patient level. The implications of this difference need to be understood in the context of how the measurement is used.
However, each bundling method has disadvantages. At the indicator level, physicians may be able to “score” higher but not achieve the outcomes that are most important to patients. Patients tend to care about outcomes that will affect the quantity or quality of their lives. In addition, patient-level bundling may be complicated by factors outside of the physician’s control and may inadvertently disadvantage physicians based on the patients for whom they provide care. This may lead to patient profiling and selective access to care, which may not serve the public health interest. When applying these bundling methods to performance review, it may be prudent to apply indicator-level bundling to practice, with negative reinforcements if the indicator-level bundle is considered the minimum basic standard. Negative reinforcements could include decreased reimbursement or lower physician rating. The more stringent patient-level bundling, however, could be paired with positive reinforcements (increased reimbursement, bonuses, etc) that would reward those who achieve best care practices.
Limitations of this study include self-reporting of the diabetes data without an external audit. Residency programs are required to participate in this registry, but performance is not used to accredit or grade the residents or their program; therefore, there is no reason to believe that the data are inaccurate because of that pressure. In addition, previous performance measurement programs that rely only on external data collection have proven to be problematic.30 Furthermore, the AOA-CAP database was not developed for pay-for-performance evaluation, and the program may have been designed differently if intended for this purpose. Previous research has also raised questions regarding the reliability of individual physician report cards, especially when these report cards are reporting outcomes data that can be affected by patient factors and by issues of sufficient power to determine the difference.22,23,30,31
However, there have been some early successes in use of bundling in outpatient diabetes care. Weber et al26 used bundling of processes of care and outcomes and an electronic medical record to improve diabetes care for an entire health system within a calendar year. In that study, there was a statistically significant increase in the number of patients who reached goal A1C level and blood pressure and who had received a pneumococcal vaccine. Projects such as these are proactive and, if reproducible, could provide stimulus for greater use of bundling care to improve outcomes.
Bundling of care can be useful in clinical care and in performance measurement. When dealing with large numbers of patients or physicians, bundling can provide a summary statistic that can be used over time to track progress and to demonstrate performance improvement. This could be used by physicians to market their practice or to provide head-to-head comparison with other regional physicians.
Unfortunately, a shortcoming of bundling of care is the assumption that each component of the bundle is of equal importance. Furthermore, bundling of outcomes will require some adjustment for factors outside of a physician’s control and may penalize those physicians who serve underserved communities. Scholle et al30,31 suggest that a reliability score should be applied when using composite measures for physicians. If financial incentives are tied to the bundling process, it is critical that they are applied uniformly and are directed toward behaviors that help to improve quality of care for the individual and for the general public. Snyder et al32 recommend that performance measurement should be used only when several safeguards have been enacted, including ensuring transparency, measuring those elements that are important to patients, and monitoring and intervening for unwanted physician behavior (such as deselection of patients or gaming the system).
In conclusion, the method of bundling in this study, whether processes of care versus outcomes or indicator level versus patient level, statistically changed performance results. In addition, this study demonstrated that the AOA-CAP database can be a powerful tool for quality performance programs and can assist in the bundling of performance measures. Because bundling methods will be used in the future, physicians need to address patient-level and system-level variables to make significant changes in achieving these goals. We recommend that the bundling process be carefully and thoroughly evaluated before these methods are implemented in the healthcare system.
Author Affiliations: From the Department of Family Medicine (JHS) and CORE Research (GDB), Ohio University, Athens, OH; Applied Outcomes (RJS), Worthington, OH; and the Department of Quality and Research (SLM), American Osteopathic Association, Chicago, IL.
Funding Source: Support of the research team was provided by the Osteopathic Heritage Foundation.
Author Disclosure: Dr Shubrook reports receiving grants from Novo Nordisk, Osteopathic Heritage Foundation, sanofi-aventis, and Takeda. Ms McGill is an employee of the American Osteopathic Association, which funds the Clinical Assessment Program. Ms McGill also reports receiving a research grant from the Osteopathic Heritage Foundation to develop the manuscript. The other authors (RJS, GDB) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JHS, RJS); acquisition of data (JHS, RJS, SLM); analysis and interpretation of data (JHS, RJS, GDB); drafting of the manuscript (JHS, RJS, GDB); critical revision of the manuscript for important intellectual content (JHS, GDB); statistical analysis (RJS, GDB); obtaining funding (SLM); administrative, technical, or logistic support (SLM); and supervision (JHS, SLM).
Address correspondence to: Jay H. Shubrook Jr, DO, Department of Family Medicine, Ohio University, 69 Elmwood Pl, Athens, OH 45701. E-mail: shubrook@ohio.edu.
1. Centers for Disease Control and Prevention. National diabetes fact sheet, 2007. http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2007.pdf. Accessed January 25, 2009.
2. American Diabetes Association. Direct and indirect costs of diabetes in the US. http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2007.pdf. Accessed January 9, 2010.
3. American Diabetes Association. Executive summary: standards of medical care in diabetes: 2008. http://care.diabetesjournals.org/cgi/reprint31Supplement_1/S5. Accessed January 25, 2009.
4. National Cholesterol Education Program. Adult treatment panel III guidelines at-a-glance: quick desk reference. http://www.nhlbi.nih.gov/guidelines/cholesterol/atglance.pdf. Accessed January 25, 2009.
5. National Heart, Lung, and Blood Institute. The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7). http://www.nhlbi.nih.gov/guidelines/hypertension/. Accessed January 25, 2009.
6. Gaede P, Vedel P, Larsen N, Jensen GV, Parving HH, Pedersen O. Multifactorial intervention and cardiovascular disease in patients with type 2 diabetes. N Engl J Med. 2003;348(5):383-393.
7. Gaede P, Lund-Andersen H, Parving HH, Pedersen O. Effect of a multifactorial intervention on mortality in type 2 diabetes. N Engl J Med. 2008;358(6):580-591.
8. Resnick HE, Foster GL, Bardsley J, Ratner RE. Achievement of the American Diabetes Association clinical practice recommendations among U.S. adults with diabetes, 1999-2002: the National Health and Nutrition Examination Survey. Diabetes Care. 2006;29(3):531-537.
9. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
10. Grant RW, Buse JB, Meigs JB; University HealthSystem Consortium (UHC) Diabetes Benchmarking Project Team. Quality of diabetes care in U.S. academic medical centers: low rates of medical regimen change. Diabetes Care. 2005;28(2):337-442.
11. National Committee for Quality Assurance Web site. http://www.ncqa.org. Accessed May 20, 2009.
12. National Committee for Quality Assurance. Health Employer Data Information Set (HEDIS) 2008. http://www.ncqa.org/tabid/536/Default.aspx. Accessed January 25, 2009.
13. Agency for Healthcare Research and Quality. The Ambulatory Care Quality Alliance recommended starter set: clinical performance measures for ambulatory care. http://www.ahrq.gov/qual/aqastart.htm. Accessed January 25, 2009.
14. Rosenthal MB, Landon BE, Normand SL, Frank RG, Epstein AM. Pay-for-performance in commercial HMOs. N Engl J Med. 2006;355(18):1895-1902.
15. An T, Bluhm J, Foldes SS, et al. A randomized trial of a pay-for-performance program targeting clinician referral to a state tobacco quitline. Arch Intern Med. 2008;168(18):1993-1999.
16. Nolan T, Berwick DM. All-or-none measurement raises the bar on performance. JAMA. 2006;295(10):1168-1170.
17. Dudley RA. Pay-for-performance research: how to learn what clinicians and policy makers need to know. JAMA. 2005;294(14):1821-1823.
18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496.
19. Centers for Medicare & Medicaid Services 8th Scope of Work. Section B: supplies or services and prices/costs. http://www.cms.gov/QualityImprovementOrgs/Downloads/8thSOW.pdf. Accessed November 12, 2009.
20. Dellinger EP, Hausmann SM, Bratzler DW, et al. Hospitals collaborate to reduce surgical site infections. Am J Surg. 2005;190(1):9-15.
21. Premier Inc. Summary of the composite quality scoring methodology. http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/composite-scoring-overview.pdf. Accessed January 9, 2010.
22. Samuels TA, Bolen S, Yeh HC, et al. Missed opportunities in diabetes management: a longitudinal assessment of factors associated with sub-optimal quality. J Gen Intern Med. 2008;23(11):1770-1777.
23. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician “report cards” for assessing the costs and quality of care of a chronic disease. JAMA. 1999;281(22):2098-2105.
24. Greenfield S, Kaplan SH, Kahn R, Ninomiya J, Griffith JL. Profiling care provided by different groups of physicians: effects of patient case-mix (bias) and physician-level clustering on quality assessment results. Ann Intern Med. 2002;136(2):111-121.
25. Centers for Medicare & Medicaid Services. Medicare managed care risk adjustment method announced. http://www.cms.hhs.gov/apps/media/press/testimony.asp?Counter=100. Accessed May 25, 2009.
26. Weber V, Bloom F, Pierdon S, Wood C. Employing the electronic health record to improve diabetes care: a multifaceted intervention in an integrated delivery system. J Gen Intern Med. 2008;23(4):379-382.
27. Petitti DB, Contreras R, Ziel FH, Dudl J, Domurat ES, Hyatt JA. Evaluation of the effect of performance monitoring and feedback on care process, utilization, and outcome. Diabetes Care. 2000;23(2):192-196.
28. Mangione CM, Gerzoff RB, Williamson DF, et al; TRIAD Study Group. The association between quality of care and the intensity of diabetes disease management programs. Ann Intern Med. 2006;145(2):107-116.
29. Gray J, Millett C, Saxena S, Netuveli G, Khunti K, Majeed A. Ethnicity and quality of diabetes care in a health system with universal coverage: population-based cross-sectional survey in primary care. J Gen Intern Med. 2007;22(9):1317-1320.
30. Scholle SH, Roski J, Adams JL, et al. Benchmarking physician performance: reliability of individual and composite measures. Am J Manag Care. 2008;14(12):833-838.
31. Scholle SH, Roski J, Dunn DL, et al. Availability of data for measuring physician quality performance. Am J Manag Care. 2009;15(1):67-72.
32. Snyder L, Neubauer RL; American College of Physicians Ethics, Professionalism and Human Rights Committee. Pay-for-performance principles that promote patient-centered care: an ethics manifesto. Ann Intern Med. 2007;147(11):792-794.