The American Journal of Managed Care
Mattke et al1 pose an important question for the disease management (DM) industry in their article entitled “Evidence for the Effect of Disease Management: Is $1 Billion a Year a Good Investment?” They document few high-quality large-scale studies on the subject that use rigorous research designs and note that the peer-reviewed literature reveals consistent clinical improvements but variable economic outcomes in DM. Although DM seems to improve quality of care, the authors conclude that its effect on cost is uncertain. Given that DM has been promoted for 15 years with significant penetration into the US healthcare fabric,2 how is it possible that there is widespread adoption of something that seems to have so little justification in the medical literature? Moreover, might DM still be a wise investment, despite the current state of research evidence?
There are good reasons why the peer-reviewed literature has been slow to validate DM. First, purchasers of DM programs do not require randomized controlled trial (RCT) studies to satisfy their financial expectations for DM; most are sufficiently convinced of the economic benefits of these programs that they are unwilling to exclude any part of their population from the interventions to form a control group for an RCT study design.
Second, buyers of DM services are not in the business of proving new science but of managing a health plan population or their workforce. They base daily decisions on business intuition and common sense, appreciating also that DM outcomes evaluation is an evolving discipline. More credible evidence for the economic effect of DM is accumulating over time, thanks in large part to efforts by DMAA: The Care Continuum Alliance to standardize evaluation methods for DM programs when RCTs are not feasible.3
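To give a sense of what such non-RCT evaluation methods typically look like, consider a simplified sketch of a population-based, trend-adjusted pre-post comparison; the symbols and the exact form shown here are illustrative assumptions, not the DMAA guideline formula itself:

\[
\widehat{S} \;\approx\; \bigl(\mathrm{PMPM}_{\mathrm{baseline}} \times f_{\mathrm{trend}} \;-\; \mathrm{PMPM}_{\mathrm{program}}\bigr) \times M
\]

Here \(\widehat{S}\) is the estimated savings, \(\mathrm{PMPM}\) is the per-member-per-month cost, \(f_{\mathrm{trend}}\) is an assumed cost-trend factor projecting baseline costs forward, and \(M\) is the number of member-months in the program period. The plausibility of any such estimate turns on how the trend factor and the measured population are defined, which is precisely why standardized methods matter.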
Third, even when buyers are aware of literature that demonstrates inconclusive results for multiple programs, they still are willing to believe that their individual program results are valid once they pass review by their actuaries, consultants, vendors, or chief financial officer.4 Disease management is a diverse set of interventions applied to a heterogeneous population and cannot be regarded as having a fixed “dose-response curve” like a new drug.5 Just because DM results, in aggregate, do not make the compelling case that savings are invariably achieved does not mean that a specific DM program cannot achieve substantial savings.6 Goetzel,7 Krause,8 Bachler,9 Bodenheimer,10 Beaulieu,11 Sidorov,12 Villagra,13 and their colleagues have demonstrated that some DM programs do indeed save money.
Are those adopting DM programs guilty of making leaps of faith in the absence of indisputable scientific support for DM? The answer is yes only if a leap of faith is defined as any decision made in the absence of compelling RCT evidence, which does not exist for most of what physicians do in everyday practice.14 Scientists recognize a hierarchy of evidence, starting with case studies at the low end and extending to RCTs and meta-analyses at the top end, with case-control and other observational designs along the way.15 Many researchers have identified problematic evidence from RCTs and meta-analyses. Tunis et al16 called for greater numbers of “practical clinical trials” to better meet the needs of decision makers with regard to healthcare delivery and policy. Westfall and colleagues note that “What is efficacious in randomized clinical trials is not always effective in the real world of day-to-day practice”17(p404) and that “Practice-based research provides the laboratory that will help generate new knowledge and bridge the chasm between recommended care and improved health.”17(p406) Horn and Gassaway conclude: “Although RCTs are important to confirm whether a new treatment causes an effect, they are unlikely to discover combinations of interventions or practices that are effective and efficient in routine care.”18(pS50) There is a sizable body of accumulated DM outcomes,19 despite the paucity of published RCT evidence.
Let us return to the question of whether the more than $1 billion spent annually on DM is a good investment. Let us suppose that it is eventually shown by replicated RCTs that, in aggregate, DM programs consistently improve clinical outcomes, quality of life, functional status, and worker productivity but do not invariably produce cost savings. Might it still be the case that DM is consistently cost-effective? If that were the case (and many DM experts believe that it is plausible), it would be noteworthy, because little of what physicians do to patients is ever cost saving (albeit life saving). Medicare is not allowed to consider cost-effectiveness in approving new technology for reimbursement, and the US Food and Drug Administration must approve any new drug shown to be safe and effective regardless of cost or comparative effectiveness. Few would question whether health plans should conduct case management, whether hospitals should provide discharge planning, or whether physicians should educate patients about prevention and healthful lifestyles, yet none of these accepted health interventions has, to my knowledge, been shown by replicated RCT evidence to be consistently cost saving or cost-effective.
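To make the distinction between cost saving and cost-effective concrete, consider the incremental cost-effectiveness ratio, a standard health-economics construct offered here purely for illustration rather than drawn from Mattke et al:

\[
\mathrm{ICER} \;=\; \frac{C_{\mathrm{DM}} - C_{\mathrm{usual\ care}}}{E_{\mathrm{DM}} - E_{\mathrm{usual\ care}}}
\]

where \(C\) denotes total cost and \(E\) a measure of health benefit such as quality-adjusted life-years. A program is cost saving only when the numerator is negative, but it may still be judged cost-effective whenever the ratio falls below an accepted willingness-to-pay threshold; that is the sense in which DM could fail to save money in aggregate yet remain a sound investment.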
Perhaps Mattke et al consider their title question a rhetorical one. I regard it as a serious one that merits a comprehensive accounting of all the facts and circumstances, including, but not limited to, peer-reviewed research. Health and DM programs have evolved rapidly during the past decade, and, with the continued convergence of the electronic health record, the personal health record, and pay for performance, as well as the anticipated emergence of the patient-centered medical home, there are exciting opportunities ahead to further refine best practices in chronic care management and achieve still better outcomes. There are sound reasons why DM outcomes satisfy buyers today, even if academics remain unconvinced. Taking all the data and circumstances as a whole, it is reasonable and responsible to conclude that we are wise to continue investing in DM while accumulating more and better evidence about its total population effects.
Gordon K. Norman, MD, MBA
Alere Medical, Inc.
Irvine, California
Author Disclosure: Dr Norman is an employee of Alere Medical, Inc, a health and disease management company. He also is a member of the board for DMAA: The Care Continuum Alliance.
Address correspondence to: Gordon K. Norman, MD, MBA, Alere Medical, Inc, 4 Park Plaza, Irvine, CA 92614. E-mail: gnorman@alere.com.
References
2. Matheson D, Psacharopoulos D, Wilkins A; Boston Consulting Group. Realizing the promise of disease management: payer trends and opportunities in the United States. 2006. www.bcg.com/publications/files/Realizing_the_Promise_of_Disease_Management_Feb06.pdf. Accessed January 10, 2008.
4. MacStravic S. Does disease management work yet? www.worldhealthcareblog.org/2007/04/13/does-disease-management-work-yet. Accessed January 10, 2008.
6. MacStravic S. Does disease management save anybody money? December 9, 2007. www.worldhealthcareblog.org/category/diseasemanagement. Accessed January 10, 2008.
8. Krause DS. Economic effectiveness of disease management programs: a meta-analysis. Dis Manag. 2005;8:114-134.
10. Bodenheimer T. Disease management in the American market. BMJ. 2000;320:563-566.
12. Sidorov J, Shull R, Tomcavage J, Girolami S, Lawton N, Harris R. Does diabetes disease management save money and improve outcomes? A report of simultaneous short-term savings and quality improvement associated with a health maintenance organization–sponsored disease management program among patients fulfilling health employer data and information set criteria. Diabetes Care. 2002;25:684-689.
14. University of Sheffield School of Health and Related Research. What proportion of healthcare is evidence based? resource guide. www.shef.ac.uk/scharr/ir/percent.html. Accessed January 10, 2008.
16. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290:1624-1632.
18. Horn SD, Gassaway J. Practice-based evidence study design for comparative effectiveness research. Med Care. 2007;45(suppl 2):S50-S57.
REPLY:
Soeren Mattke, MD, DSc
RAND Health
Arlington, Virginia
Michael Seid, PhD
Cincinnati Children’s Hospital
Cincinnati, Ohio
Sai Ma, PhD
Baltimore, Maryland
Author Disclosure: Dr Mattke reports conducting research and serving as a consultant on projects for various purchasers and vendors of disease management services. Drs Seid and Ma report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this letter.
Address correspondence to: Soeren Mattke, MD, DSc, RAND Health, 1200 S Hayes St, Arlington, VA 22202. E-mail: mattke@rand.org.