The American Journal of Managed Care

September 2009, Volume 15, Issue 9

Diabetes Care Quality: Insurance, Health Plan, and Physician Group Contributions

Reporting physician group performance in addition to health plan performance may stimulate greater improvement in diabetes care.

Objective:

To study the relative contributions of insurance product (commercial, Medicaid, Medicare), health plan, and physician group to quality of diabetes care.

Study Design:

Cross-sectional observational study using data reported by Minnesota Community Measurement (MNCM) on care provided in 2005.

Methods:

Individual performance rates for glycosylated hemoglobin (A1C) level <7%, low-density lipoprotein cholesterol (LDL-C) level <100 mg/dL, blood pressure <130/80 mm Hg, documented tobacco-free status, and aspirin use, as well as an all-or-none composite measure (Optimal Diabetes Care), were obtained from MNCM for 57 physician groups. Results included 7169 patients connected with 1 of 8 health plans and 1 of 3 types of insurance product.

Results:

All factors studied had a relationship to quality results. Of the factors, insurance product had the strongest relationship to A1C, LDL-C, and Optimal Diabetes Care. Health plan had the strongest relationship to tobacco-free status and daily aspirin use, and physician group had the strongest relationship to blood pressure control. Physician group results varied more than those for health plan or insurance product on most measures.

Conclusions:

All factors studied contribute to diabetes quality results. Reporting both physician group performance and health plan performance may offer a greater opportunity to improve care than reporting only health plan or only physician group results.

(Am J Manag Care. 2009;15(9):585-592)

Physician group, health plan, and insurance product all affect diabetes quality of care. The relative contributions of these elements were examined in this cross-sectional observational study.

  • Reporting physician group performance in addition to health plan performance may stimulate greater improvement in patient care.
  • Collaboration enhances a region’s ability to report physician group results.
  • Incentives to focus on improving the quality of care community-wide might encourage collaborative efforts to report physician group performance.

A wide variety of factors are believed to contribute to the level of measured care quality for patients in the US healthcare system.1 Among these factors are the characteristics of the insurance product, health plan, physician group, individual physician, and individual patients.2

Patient ethnicity, sex, and socioeconomic status have been shown to impact quality, as have provider group organizational structure, provider group culture, and the use of patient registries, electronic health records, and other practice supports.3-11 Health plan results vary by geography, for-profit versus not-for-profit status, patient ethnicity, enrollee socioeconomic status, and other factors.12-16 The Medicaid insurance product is associated with generally lower-quality results, whereas analyses of Medicare data demonstrate that, for many measures, quality varies inversely with overall medical spending.17,18

Publicly reporting healthcare quality results is becoming more common.19-21 The Centers for Medicare & Medicaid Services provides comparative quality-of-care results about hospitals, nursing homes, home care agencies, and health plans online.22 The National Committee for Quality Assurance reports quality-of-care results for health plans and by type of insurance product (commercial, Medicaid, Medicare).23 Regional organizations such as Minnesota Community Measurement (MNCM), Massachusetts Health Quality Partners, the Wisconsin Collaborative for Healthcare Quality, and the California Office of the Patient Advocate report quality-of-care results for physician group practices.24-27 Other programs report results by individual physician.28

Although insurance product, health plan, and physician group have been demonstrated to affect measured quality of care, few studies have attempted to determine the importance of these elements relative to each other.

Studies comparing the relative importance of different levels of the healthcare system generally have found that smaller units within the healthcare system contribute more to quality-of-care results. For example, clinic site seems to have a stronger effect on patient experience of care than physician group and health plan; patient and physician factors appear to have a greater effect on diabetes results than clinic site; and the care delivery system has been found to be more related to clinical performance than the health plan.12,29,30 We decided to take advantage of a unique data set in Minnesota to compare the relative contribution of health plan, insurance product, and physician group to quality of care for diabetes.

We hypothesized a priori that quality-of-care results would vary across all 3 levels. We predicted that the variation would be greatest among physician groups and that physician group would contribute more to quality-of-care results than either health plan or insurance product. We also predicted that a composite quality measure using all-or-none scoring and the individual quality measures would show similar relative contributions to quality. We expected that physician group practice management decisions, practice culture, and office systems would have a stronger effect on patient quality of care than any influences at a higher level in the healthcare system. Physician groups establish practice policies, make investment decisions about electronic health records, implement and maintain patient registries and other office systems, set performance expectations for staff, and interact directly with patients during encounters for medical care.

Health plans focus on improving quality results for their enrollees, often with strategies, such as telephonic disease management, that supplement care provided by physician practices. Physician groups typically care for a mix of patients from most or all health plans within their region, so health plan quality results reflect the average quality of care provided to enrollees by many different physician groups.

METHODS

Background

Minnesota Community Measurement is an organization that provides standardized publicly reported measures of care quality across many different physician groups, 8 different health plans, and 3 different insurance product types in Minnesota. Data collected by MNCM provide a unique opportunity to address the relative impact of health plan, insurance product, and physician group on quality of diabetes care.

Seven Minnesota health plans and the Minnesota Medical Association initiated MNCM in 2002. Minnesota Community Measurement combines health plan data to measure and publicly report quality of care across multiple clinical domains in more than 70 primary care and multispecialty physician group practices in Minnesota.

Study Hypothesis

The study was designed to assess how much of observed variation in care quality is attributable to health plans compared with insurance products and physician groups. The study also assessed whether using a single all-or-none composite measure of quality would yield results regarding variation and factor contribution similar to those obtained when assessing individual quality measures.

To simplify the assessment, we decided to limit our analysis to a single condition (diabetes) that has several well-recognized performance measures that have good evidence of a relationship to patient outcomes and for which an all-or-none composite measure has been reported.31 We analyzed comparative performance results for glucose control, blood pressure (BP) control, lipid control, tobacco-free status, aspirin use, and Optimal Diabetes Care (a composite of all 5 measures) as publicly reported by MNCM at www.MNHealthcare.org to address the study questions.

Data

Minnesota Community Measurement uses health plan administrative billing data to identify patients of each physician group with various conditions (eg, diabetes, asthma, hypertension). The data set reflects patients enrolled in managed care plans, including not only patients enrolled in commercial health maintenance organization/point-of-service/preferred provider organization products, but also Medicare (cost and risk products) and Minnesota public healthcare programs (Prepaid Medical Assistance, MinnesotaCare, and General Medical Assistance products) from the 8 health plans that provide nearly all of the non-VA insurance coverage in the state. The data do not include the uninsured, patients who pay out-of-pocket, or patients covered by Medicaid/Medicare fee-for-service products. Data were collected by health plans in a standard way using a combination of Healthcare Effectiveness Data and Information Set (HEDIS) and MNCM specifications. Some measures were reported exclusively from administrative (health plan) data, whereas others (eg, diabetes) required on-site review of random samples of individual clinical records. Record reviews of a random sample of at least 60 adults with diabetes receiving care at each physician group were conducted by trained nurse reviewers following a standardized protocol.

Physician groups are defined by MNCM convention and are mutually exclusive. Groups do not share providers. One group, organized as a staff model group, has patients primarily from 1 health plan, providing care for approximately one-third of that plan’s enrollees. Other groups serve enrollees from multiple plans.

Physician group samples are identified through a series of steps. Individual health plans identify patients meeting denominator inclusion criteria per HEDIS specifications. Patients then are assigned to physician groups by each health plan using attribution logic based on evaluation and management and preventive care billing codes. These codes identify the frequency of patient visits to a physician group. Patients are attributed to the physician group they visited most frequently during the measurement year. If patients visited 2 or more groups with the same frequency, the patient is attributed to the physician group visited most recently. Validation processes are part of data collection, aggregation, and reporting. Detailed information about MNCM procedures is available at www.mnhealthcare.org.
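
As a rough illustration of this attribution step only, the sketch below applies the rule described above: count visits per physician group during the measurement year and break ties by the most recent visit. The data structure and field names are assumptions for illustration, not MNCM's actual implementation.

```python
from collections import Counter
from datetime import date

def attribute_patient(visits):
    """Attribute a patient to one physician group (illustrative sketch).

    `visits` is a list of (group_id, visit_date) tuples drawn from
    evaluation-and-management / preventive-care claims during the
    measurement year. The patient is attributed to the most frequently
    visited group; ties go to the group visited most recently, as
    described in the text. Field names and structure are assumptions.
    """
    if not visits:
        return None
    counts = Counter(group for group, _ in visits)
    top = max(counts.values())
    tied = {group for group, n in counts.items() if n == top}
    # Among tied groups, pick the one with the most recent visit.
    return max((v for v in visits if v[0] in tied), key=lambda v: v[1])[0]

# Example: two visits each to groups "A" and "B"; "B" was seen most recently.
visits = [("A", date(2005, 2, 1)), ("B", date(2005, 5, 1)),
          ("A", date(2005, 3, 1)), ("B", date(2005, 11, 1))]
print(attribute_patient(visits))  # -> "B"
```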

Data available from MNCM consisted of deidentified patient-level diabetes performance data for care in year 2005 along with information to allow assignment of each patient to a specific health plan of enrollment, physician group providing care, and insurance product category of commercial, Medicaid, or Medicare. Patients dually enrolled in both Medicaid and Medicare were excluded from this analysis to simplify the comparisons.

Person-level diabetes performance measures consisted of glycosylated hemoglobin (A1C) level, systolic and diastolic BP, low-density lipoprotein cholesterol (LDL-C) values, documented tobacco-free status, and regular aspirin use for those over age 40 years. In addition to these individual measures, MNCM also reports an all-or-none composite measure, Optimal Diabetes Care, which is coded as positive for each patient meeting all 5 criteria: A1C ≤7%, BP <130/80 mm Hg, LDL-C <100 mg/dL, documented tobacco-free status, and regular aspirin use for those over age 40 years.32 The composite measure was coded as a failure for those not meeting all 5 measures or having missing data on any individual measure. However, those ineligible or contraindicated for aspirin use were coded as compliant. Patients with no test value within a year were classified as noncompliant. Raw data files consisted of the universe of diabetes patients sampled for chart review (n = 8401), reduced for analytic purposes by excluding patients dually enrolled in Medicaid and Medicare (n = 683), those lacking a physician group code (n = 524), and those with values outside of a clinically tenable range (eg, A1C <2%, LDL-C <10 mg/dL, diastolic BP <30 mm Hg) (n = 25). The resulting analytic file consisted of 7169 deidentified patient records.
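
A minimal sketch of how the all-or-none composite could be scored from person-level records, following the coding rules described above (thresholds are taken from the text; the record layout and field names are assumptions for illustration):

```python
def optimal_diabetes_care(rec):
    """All-or-none Optimal Diabetes Care composite for one patient record.

    Illustrative sketch: `rec` is a dict of person-level values. Missing
    data (None) counts as a failure, per the text. The aspirin criterion
    applies to patients over age 40; those ineligible or contraindicated
    are treated as compliant.
    """
    def met(value, test):
        return value is not None and test(value)

    over_40 = rec.get("age", 0) > 40
    aspirin_ok = (not over_40
                  or rec.get("aspirin_contraindicated", False)
                  or rec.get("aspirin_use") is True)

    return (met(rec.get("a1c"), lambda v: v <= 7.0)
            and met(rec.get("systolic_bp"), lambda v: v < 130)
            and met(rec.get("diastolic_bp"), lambda v: v < 80)
            and met(rec.get("ldl"), lambda v: v < 100)
            and rec.get("tobacco_free") is True
            and aspirin_ok)

# Example: this record fails the composite only because LDL-C is missing.
print(optimal_diabetes_care({"a1c": 6.8, "systolic_bp": 124, "diastolic_bp": 76,
                             "ldl": None, "tobacco_free": True,
                             "age": 55, "aspirin_use": True}))  # -> False
```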

Analysis

All measures were analyzed as dichotomies to be in line with clinical goals desired in 2005 (A1C ≤7%, BP <130/80 mm Hg, LDL-C <100 mg/dL, documented tobacco-free status, and regular aspirin use for those over age 40 years) and were expressed as rates meeting the goal. Performance results were summarized by reporting the standard deviation of rates, minimum and maximum rates, and 25th/50th/75th percentiles for rates computed at the product, plan, and physician group level. Unadjusted variation by product, plan, and physician group is illustrated graphically in a bubble chart of raw rates (Figure).
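
For example, the level-specific summaries could be computed with a simple group-by over the analytic file. This is a sketch only; it assumes a pandas DataFrame with one row per patient, 0/1 indicator columns for each measure, and identifier columns for product, plan, and physician group (all column names are placeholders):

```python
import pandas as pd

def rate_summary(df, level, measure):
    """Summarize rates of meeting `measure` across units of `level`.

    `level` is an identifier column ('product', 'plan', or 'group') and
    `measure` a 0/1 column (e.g. 'a1c_at_goal'); names are placeholders.
    Returns the spread statistics reported in the text.
    """
    rates = df.groupby(level)[measure].mean()  # one observed rate per unit
    summary = {"sd": rates.std(), "min": rates.min(), "max": rates.max(),
               "p25": rates.quantile(0.25), "median": rates.quantile(0.50),
               "p75": rates.quantile(0.75)}
    return pd.Series(summary)

# e.g. rate_summary(df, "group", "a1c_at_goal") for the physician group level
```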

Logistic regression assessed the contributions of insurance product, health plan, and physician group to the Optimal Diabetes Care composite and its 5 components for care in 2005. Product, plan, and group were treated as fixed effects and entered into models as sets of dummy-coded contrasts. The P value from likelihood ratio tests comparing a full main-effects model (effects for product, plan, group) and a model with 1 set of contrasts deleted was used to assess whether the contribution of a particular set of contrasts (eg, for health plan) was significant. Model fit was assessed with changes in Akaike’s information criterion comparing a main-effects model with 2 classification variables to a full model. Because Akaike’s information criterion penalizes fit for the number of terms added to the model, it was used in this article to suggest which term, when added to the model, improved fit the most. The area under the receiver operating characteristic curve (C statistic) was reported for a model with all 3 classification variables in the equation (product, plan, physician group) to assess the overall ability of the model to predict care quality across the different quality measures. The unique contribution to the C statistic of each classification variable when the other 2 variables were in the equation also was reported as a descriptive assessment of the unique predictive value of that variable above and beyond the other 2 variables. Because the C statistic was not penalized for model complexity and because each classification variable involved a different number of dummy codes (product had 2, plan had 7, group had 56), the magnitudes of these increments to the C statistic could be interpreted as relative contribution to prediction.
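
The sketch below illustrates the kind of model comparison described here for one measure, using standard open-source tools rather than the authors' actual code. It assumes the same DataFrame as above, with a binary outcome column (here called optimal) and categorical columns product, plan, and group (placeholder names); the likelihood ratio test, change in AIC, and C statistic increment are shown for the set of health plan contrasts.

```python
import statsmodels.formula.api as smf
from scipy import stats
from sklearn.metrics import roc_auc_score

def compare_plan_contribution(df, outcome="optimal"):
    """Assess the contribution of health plan to one binary quality measure.

    Assumes df has a 0/1 column `outcome` and categorical columns
    'product', 'plan', and 'group' (placeholder names, one row per patient).
    """
    full = smf.logit(f"{outcome} ~ C(product) + C(plan) + C(group)",
                     data=df).fit(disp=0)
    reduced = smf.logit(f"{outcome} ~ C(product) + C(group)",
                        data=df).fit(disp=0)

    # Likelihood ratio test: is the set of plan contrasts, added last, significant?
    lr = 2 * (full.llf - reduced.llf)
    lr_p = stats.chi2.sf(lr, full.df_model - reduced.df_model)

    # AIC increment: how much does adding plan improve penalized model fit?
    aic_increment = reduced.aic - full.aic

    # C statistic of the full model and the increment attributable to plan.
    c_full = roc_auc_score(df[outcome], full.predict(df))
    c_increment = c_full - roc_auc_score(df[outcome], reduced.predict(df))

    return {"lr_p": lr_p, "aic_increment": aic_increment,
            "c_full": c_full, "c_increment": c_increment}
```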

RESULTS

Although performance rates aggregated across product, plan, and physician group for individual measures ranged from 0.41 to 0.67, the rate for the Optimal Diabetes Care measure was only 0.08 because of the requirement of satisfying the threshold on all 5 measures for each individual in the denominator (Table 1).
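
This gap is the expected behavior of all-or-none scoring: because every component must be met, the composite rate falls well below any individual rate. As a purely illustrative calculation with hypothetical component rates (chosen near the middle of the reported 0.41 to 0.67 range, and assuming independence, which does not hold in practice):

```python
# Hypothetical component rates, roughly within the reported 0.41-0.67 range.
rates = [0.45, 0.50, 0.55, 0.60, 0.65]
composite = 1.0
for r in rates:
    composite *= r          # all five thresholds must be met
print(round(composite, 3))  # 0.048 -- a rough order-of-magnitude check
                            # against the observed composite rate of 0.08
```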

Variation in performance results is shown in Table 2 and the Figure by insurance product, health plan, and physician group. The bubble area in the Figure is proportional to sample size. A plot of model-adjusted rates was not notably different from the raw rates presented in the Figure and is not presented. In Table 2, for example, rates for the 57 physician groups for A1C ≤7% ranged from 0.19 to 0.67 (SD = 0.093) with a median rate of 0.47. The Figure illustrates that the low minimum rate of 0.19 was due to an outlier group with a rate considerably lower than other groups. Insurance product rates for A1C ≤7% ranged from 0.44 to 0.59 (SD = 0.081), and health plan rates ranged from 0.35 to 0.54 (SD = 0.065). Physician group rates ranged from 0.22 to 0.62 for BP <130/80 mm Hg, from 0.24 to 0.66 for LDL-C <100 mg/dL, from 0.32 to 0.86 for documented tobacco-free status, and from 0.40 to 0.90 for daily aspirin use. The plan with the largest number of patient-level observations had the highest rate among the 8 plans for 5 of 6 measures (Figure). In addition, the physician group with the largest number of patient-level observations among the 57 groups had among the 5 highest rates for 4 of 6 measures.

Variation as measured by the standard deviation was highest for the physician group for 5 of the 6 measures. The exception was tobacco-free status, for which health plan showed the highest variation. For all measures except documented aspirin use, denominators (number of patient charts) used to compute each rate in Table 2 for insurance product level summaries ranged from 1949 to 2666 with a median of 2554. For health plan, the denominators ranged from 220 to 2038 (median = 776), and for physician group they ranged from 52 to 965 (median = 63). The Figure illustrates that Medicaid patients had poorer quality on most but not all measures. This finding continued to hold in analyses that also considered the effects of plan and physician group.

Likelihood ratio tests all were significant (Table 3), indicating that any variable when added last to the equation significantly improved the prediction of the measure examined. For all 6 measures, the AIC value was lowest when product, plan, and group were all in the model (data not shown) compared with any other main-effects model. The AIC increment column indicates that the addition of product to the model improved the model fit the most when examining A1C ≤7%, LDL-C <100 mg/dL, and the Optimal Diabetes Care measure. Plan improved model fit the most for tobacco-free status and daily aspirin use. Group improved model fit the most for BP <130/80 mm Hg. The total C statistic column indicates that tobacco status, aspirin use, and Optimal Diabetes Care were more accurately predicted by product, plan, and group (C ≥0.69) than the other measures (C ≤0.65). Product, plan, and physician group all showed statistically significant contributions to classification of goal status for all measures.

When we analyzed our data with the large high-performing physician group removed from the analysis, our findings did not change materially. Across the 6 measures reported in Table 2, standard deviations were reduced on average by 5% at the group level, 12% at the plan level, and 9% at the product level. The relative ranks of variation as summarized by the standard deviations in Table 2 remained the same, as did the relative ranks of AIC values in Table 3. Our conclusions remain the same.

DISCUSSION

All 3 variables are important. The conclusion one draws about relative importance varies depending on what is being examined. Our prediction that physician group would have the strongest relationship to quality-of-care results was true for only 1 of 6 measures: BP control.

For all measures, there was significant variation in the results by each grouping variable, even after considering the other 2 grouping variables (eg, by medical group after considering product and plan). All-or-none scoring lowered performance scores and dampened variation, but maintained the pattern of relative variation seen with A1C, LDL-C, and aspirin use.

Medicaid performance rates were lower than those of other insurance products for 4 of 6 measures: aspirin use, documented tobacco-free status, LDL-C, and Optimal Diabetes Care. Medicaid patients had better BP control than patients enrolled in commercial or Medicare products, and Medicaid A1C control was intermediate. Medicare performance rates were the highest for all measures with the exception of BP, for which differences between insurance products were smaller.

Measuring and reporting quality are necessary to improve quality, but the resources for doing so are limited. Understanding the relative contributions of insurance product, health plan, and physician group can help determine where measurement and reporting resources will have the greatest impact. Our data suggest that measurement and reporting have the potential to stimulate quality gains at each of these levels, so it is important to report at each level if possible.

Reporting quality results by health plan and by insurance product is common. Reporting by provider group is not. Insufficient observations and the potential for atypical patients to distort results limit the ability of individual health plans to report physician group quality-of-care results accurately. Combining data from multiple health plans has enabled MNCM to report quality-of-care results for most physician groups in the state. More than 80% of primary care physicians practice in these groups.

Two major factors drove the development of these community-wide measures in Minnesota: the need for quality-of-care measures at the physician group level and costly inefficiencies related to regulatory reporting requirements for health plans. In the year prior to pilot-testing MNCM, Minnesota health plans sampled in excess of 12,000 clinical records to fulfill their diabetes quality reporting requirements.

Despite this large number of clinical record reviews, results were unavailable for many physician groups. It was estimated that a sample sufficient to report performance of physician groups would require many fewer clinical reviews. Despite the apparent enhanced reporting capability and potential to improve quality, the MNCM model faces barriers to adoption in other regions of the country. Health plans commonly publicly report quality results, and competition for business is based partly on those results. Health plans less frequently collaborate to report physician group performance. Preference for improvement strategies capable of differentially impacting a health plan’s members and avoidance of strategies that “lift all boats” might be predicted in this situation. Indeed, many health plans sponsor telephone-based disease management programs to help differentiate their chronic care quality results, although some have suggested that physician group disease management services might be more effective and efficient.33-37 In other studies, Medicare results have demonstrated little evidence of effect from mandatory health plan reporting.38 A sole focus on health plan quality-of-care results may create a barrier to the development of physician group reporting or improvement efforts.

The data we present have several limitations. First, they are limited to diabetes. However, further investigation is possible as quality results are available for these physician groups for other conditions as well, including immunizations; breast, cervical, and colorectal cancer screening; an all-or-none cancer screening composite measure; and a range of other acute and chronic condition measures. All are community-wide and well accepted by physician groups, who share ideas for improving care and compete for patients. Second, individual health plans were not identifiable in our data, so we could not directly investigate possible explanations for the fact that health plan had a stronger effect than physician group on documented tobacco-free status and daily aspirin use. One health plan in our data had previously reported on a pay-for-performance program that was associated with significant increases in compliance with tobacco treatment guidelines.39 Some plans offered a subset of products, leading to unavoidable collinearity of product and plan in the analysis. This made it more difficult to disentangle the effects of product and plan. Third, because of uneven and sometimes thin distributions of data across combinations of product, plan, and group, the statistical models we used did not account for the possibility that the effect of physician group on performance varied by health plan. Plan influence on group characteristics is likely to vary with the proportion of the group’s patients covered by that plan.

One advantage of reporting community-wide quality-of-care results is that it documents the variation in care quality across an entire community. Levels of diabetes quality of care in this community were better than levels reported nationally from 1988 to 2002. We hypothesize that if physician group reporting became as widespread as health plan reporting, it would be a positive force for community-wide healthcare improvement in other regions, but we see barriers to achieving that objective. We recommend that consideration be given to creating incentives for health plans to collaborate to report physician group performance. Our results suggest such reporting could provide an extra stimulus to improve care.

Author Affiliations: From Quality Quest for Health of Illinois (GMA), Peoria, IL; and HealthPartners Research Foundation (PJO, LIS, SEA, RCW, EDP, ALC), Minneapolis, MN.

Funding Source: This study was funded by HealthPartners Research Foundation grant 06-080 (Using Public Accountability Measures to Improve Quality of Care).

Author Disclosure: The authors (GMA, PJO, LIS, SEA, RCW, EDP, ALC) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (GMA, PJO, LIS, ALC); acquisition of data (GMA, RCW); analysis and interpretation of data (GMA, PJO, LIS, SEA, RCW, EDP); drafting of the manuscript (GMA, PJO, SEA, EDP); critical revision of the manuscript for important intellectual content (GMA, PJO, LIS, SEA); statistical analysis (SEA, EDP, ALC); obtaining funding (GMA, PJO, ALC); and administrative, technical, or logistic support (RCW).

Address correspondence to: Gail M. Amundson, MD, President and CEO, Quality Quest for Health of Illinois, 416 Main St, Ste 717, Peoria, IL 60602. E-mail: gamundson@qualityquest.org.

1. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.

2. Khunti K. Use of multiple methods to determine factors affecting quality of care of patients with diabetes. Fam Pract. 1999;16(5):489-494.

3. Solberg LI, Asche SB, Pawlson LG, Scholle SH, Shih SC. Practice systems are associated with high quality care for diabetes. Am J Manag Care. 2008;14(2):85-92.

4. Friedberg MW, Coltin KL, Pearson S, et al. Does affiliation of physician groups with one another produce higher quality primary care? J Gen Intern Med. 2007;22(10):1385-1392.

5. McKinlay JB, Link CL, Freund KM, Marceau LD, O’Donnell AB, Lutfey KL. Sources of variation in physician adherence with clinical guidelines: results from a factorial experiment. J Gen Intern Med. 2007;22(3):289-296.

6. Adams A, Buckingham CD, Lindenmeyer A, et al. The influence of patient and doctor gender on diagnosing coronary heart disease. Sociol Health Illn. 2008;30(1):1-18.

7. Elster A, Jarosik J, VanGeest J, Fleming M. Racial and ethnic disparities in health care for adolescents: a systematic review of the literature. Arch Pediatr Adolesc Med. 2003;157(9):867-874.

8. Tuerk PW, Mueller M, Egede L. Estimating physician effects on glycemic control in the treatment of diabetes: methods, effects sizes, and implications for treatment policy. Diabetes Care. 2008;31(5):869-873.

9. Pham HH, Schrag D, Hargraves JL, Bach PB. Delivery of preventive services to older adults by primary care physicians. JAMA. 2005;294(4):473-481.

10. Shortell SM, Rundall TG, Hsu J. Improving patient care by linking evidence-based medicine and evidence-based management. JAMA. 2007;298(6):673-676.

11. Asch SM, McGlynn EA, Hogan MM, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med. 2004;141(12):938-945.

12. Gillies RR, Chenok KE, Shortell SM, Pawlson G, Wimbush JJ. The impact of health plan delivery system organization on clinical quality and patient satisfaction. Health Serv Res. 2006;41(4 pt 1):1181-1199.

13. Schneider EC, Zaslavsky AM, Epstein AM. Quality of care in for-profit and not-for-profit health plans enrolling Medicare beneficiaries. Am J Med. 2005;118(12):1392-1400.

14. Zaslavsky AM, Hochheimer JN, Schneider EC, et al. Impact of sociodemographic case mix on the HEDIS measures of health plan quality. Med Care. 2000;38(10):981-992.

15. Brown AF, Gregg EW, Stevens MR, et al. Race, ethnicity, socioeconomic position, and quality of care for adults with diabetes enrolled in managed care: the Translating Research into Action for Diabetes (TRIAD) study. Diabetes Care. 2005;28(12):2864-2870.

16. Landon BE, Schneider EC, Normand SL, Scholle SH, Pawlson LG, Epstein AM. Quality of care in Medicaid managed care and commercial health plans. JAMA. 2007;298(14):1674-1681.

17. Fowler FJ Jr, Gallagher PM, Anthony DL, Larsen K, Skinner JS. Relationship between regional per capita Medicare expenditures and patient perceptions of quality of care. JAMA. 2008;299(20):2406-2412.

18. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273-287.

19. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111-123.

20. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008;148(2):160-161.

21. Club Diabete Sicili@. Five-year impact of a continuous quality improvement effort implemented by a network of diabetes outpatient clinics. Diabetes Care. 2008;31(1):57-62.

22. Centers for Medicare & Medicaid Services. Medicare. http://www.cms.hhs.gov/center/quality.asp. Accessed May 6, 2009.

23. National Committee for Quality Assurance. NCQA Web site. http://www.ncqa.org/tabid/60/default.aspx. Accessed May 6, 2009.

24. Minnesota Community Measurement. Minnesota HealthScores. http://www.mncm.org/site. Accessed May 6, 2009.

25. Massachusetts Health Quality Partners. MHQP Web site. http://www.mhqp.org/default.asp?nav=010000. Accessed May 6, 2009.

26. Wisconsin Collaborative for Healthcare Quality. WCHQ Web site. http://www.wchq.org/. Accessed May 6, 2009.

27. California Office of the Patient Advocate. Health care quality report card. http://www.opa.ca.gov/report_card/. Accessed May 6, 2009.

28. ConsumerHealthRatings.com. Healthcare ratings directory. http://www.consumerhealthratings.com/index.php. Accessed May 6, 2009.

29. Solomon LS, Zaslavsky AM, Landon BE, Cleary PD. Variation in patient-reported quality among health care organizations. Health Care Financ Rev. 2002;23(4):85-100.

30. O’Connor PJ, Rush WA, Davidson G, et al. Variation in quality of diabetes care at the levels of patient, physician, and clinic. Prev Chronic Dis. 2008;5(1):A15.

31. Nolan T, Berwick D. All-or-none measurement raises the bar on performance. JAMA. 2006;295(10):1168-1170.

32. Institute of Medicine. Performance Measurement: Accelerating Improvement. Washington, DC: The National Academies Press; 2006.

33. Kimura J, Dasilva K, Marshall R. Population management, systems-based practice, and planned chronic illness care: integrating disease management competencies into primary care to improve composite diabetes quality measures. Dis Manag. 2008;11(1):13-22.

34. Dorr DA, Wilcox A, Burns L, Brunker CP, Narus SP, Clayton PD. Implementing a multidisease chronic care model in primary care using people and technology. Dis Manag. 2006;9(1):1-15.

35. Casalino LP. Disease management and the organization of physician practice. JAMA. 2005;293(4):485-488.

36. Selby JV, Scanlon D, Lafata JE, Villagra V, Beich J, Salber PR. Determining the value of disease management programs. Jt Comm J Qual Saf. 2003;29(9):491-499.

37. Mattke S, Seid M, Ma S. Evidence for the effect of disease management: is $1 billion a year a good investment? Am J Manag Care. 2007;13(12):670-676.

38. Bundorf K, Kavita C, Baker L. The effects of health plan performance measurement and reporting on quality of care for Medicare beneficiaries. Paper presented at: Economics of Population Health: Inaugural Conference of the American Society of Health Economists; June 2006; Madison, WI.

39. Amundson G, Solberg LI, Reed M, Martini EM, Carlson R. Paying for quality improvement: effect on compliance with tobacco cessation guidelines. Jt Comm J Qual Saf. 2003;29(2):59-65.
