The American Journal of Managed Care
May 2004
Volume 10, Issue 5

Use of Formal Benefit/Cost Evaluations in Health System Decision Making

Objectives: To examine actual use of formal benefit/cost and benefit/risk results in health system decision making by public and private healthcare organizations.

Study Design: A direct survey with questions about healthcare decisions made by the respondent or the respondent's organization. The scope of this survey precluded meaningful quantitative analysis; thus, descriptive and qualitative analyses were performed.

Participants and Methods: An initial questionnaire was tested in 2001 with 15 respondents in 4 countries. In 2002, a revised questionnaire was sent to a convenience sample of 116 individuals representing information users (providers, payers, and regulators) and information producers (technology firms and academics) in France, Sweden, the United Kingdom, and the United States. Responses were received from 104 people (89.7%).

Results: Every information user employed benefit/risk analyses to accept or reject new interventions and delete existing technologies. In addition, 42.1% of information users used formal benefit/cost results (cost effectiveness, cost benefit, and/or cost utility). Seven providers/payers in the United States, 1 in France, and 1 in the United Kingdom required such analyses, as did 1 UK regulator. Most did not produce their own analyses but relied on those of public organizations (eg, Food and Drug Administration, National Institute for Clinical Excellence), academics, and pharmaceutical firms.

Conclusions: A surprisingly high percentage of information users (42.1%) employed formal economic evaluations (cost-effectiveness, cost-benefit, or cost-utility analyses) in deciding whether to accept, pay for, or reject new interventions or to delete old interventions. This figure was substantially higher than expected given the results of previous studies, nearly all of which found low use of formal benefit/risk and benefit/cost analyses.

(Am J Manag Care. 2004;10:329-335)

The few economic evaluations of health services prior to the mid-20th century focused on public health issues rather than on medical care services, the focus of most current evaluations.1,2 Increasingly sophisticated questions, methods, and analyses aided impressive growth of formal economic evaluations (eg, cost effectiveness, cost benefit, cost utility) over the past 2 decades.3 Such evaluations commonly occur with or after clinical and quality-of-life evaluations of health and medical technologies and population-based health improvement programs. However, it remains uncertain whether increased interest and increased numbers of formal benefit/cost analyses are accompanied by increased use of such research results in actual healthcare decision making.
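
For readers less familiar with these evaluation types, their characteristic summary measures can be stated compactly. The notation below is a generic illustration and is not drawn from this study.

\[
\text{CEA/CUA:}\qquad \mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]

where effects E are measured in natural units (eg, life-years gained) for cost-effectiveness analysis and in quality-adjusted life-years for cost-utility analysis, and

\[
\text{CBA:}\qquad \text{net benefit} = B_{\text{monetized}} - C
\]

with both benefits and costs expressed in monetary terms.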

The limited evidence available suggests slow adoption of economic evaluations for use in decision making. The main reasons seem to be a disconnect in the vernaculars of producers and users and concerns about reliability of results, especially related to issues of study transparency, timeliness, and study relevance to individual organizations. The recent report by Drummond and colleagues clearly delineates the issues; it offers guidance and recommendations to help decision makers incorporate results of economic analyses in their policy-making processes.4

One important concern is that results of the many completed benefit/cost evaluations are seldom used to help medical services delivery or health system decision making, and, when they are used, the application is often incorrect.5,6 Many have lamented infrequent use of these results and, with rare exception, their negligible impact on health system and medical care decision making at national and local levels.5-7 Multiple reasons have been suggested for low use of benefit/cost results, reasons similar to those given for the low use of clinical evidence and technology assessments.8-13 Most of the above-cited literature, though, is based primarily on opinions and perceptions of how people and organizations make healthcare decisions, from implementing new technology to paying for services to controlling expenditure growth.7,11-13 However, there are few empirical data directly supporting or rejecting these opinions and quantifying the actual role, use, and effects of economic evaluations in healthcare decision making at the national, individual-sector, or health system level. The goal of this study was therefore to examine the extent to which economic evaluations are actually being used as an aid to healthcare decision making by public and private healthcare organizations.

STUDY DESIGN

A questionnaire was developed that asked about use of specific benefit/risk and benefit/cost techniques, results, and decision outcomes, plus other benefit, risk, and economic data, in the acceptance or rejection of new or the deletion of existing interventions. Questions elicited specific examples in which formal economic evaluations had been used during 2001 and 2002 to decide whether to accept, pay for, or reject new interventions or to delete old diagnostic and treatment interventions previously approved and reimbursed. The instrument was tested with 15 experts in economic evaluation and policy. These experts were located in France, Sweden, the United Kingdom, and the United States, and all were known by the investigator. Their recommended changes to the questionnaire were incorporated, including additional and more detailed questions and the inclusion of benefit/risk questions. At the investigator's request, these 15 people recommended 116 potential respondents. The only criterion was that participants had to be decision makers knowledgeable about use of benefit/risk and benefit/cost results, or lack thereof, in their organizations.

Two categories of organizations were surveyed: information users (providers, payers, and regulators) and information producers (technology firms and academics). With the widest range of participants from multiple health care sectors, this was the largest direct survey undertaken to date on the use of benefit/cost evaluations in healthcare decision making. The working hypothesis was that this survey would confirm results of previous studies of fewer participants.

The questionnaire had 4 parts. First were questions about actual decisions made by the respondent to accept, reject, pay for, or delete new or existing technologies. A second set of questions asked about actual decisions made by the respondent's organization to accept, reject, pay for, or delete new or existing interventions. The third part of the questionnaire requested examples of both types of decisions made during 2001 and 2002. The final section inquired about reasons for the decisions to accept, reject, pay for, or delete technologies and about changing indications for use of previously accepted technologies.

RESULTS

Between May 1 and December 31, 2002, the revised survey was administered in person (26.9%), over the telephone (72.1%), and in 2 cases by e-mail (1.9%) to the population of 116 people in France, Sweden, the United Kingdom, and the United States. Responses were received from 104 participants (89.7%). Before this study, the investigator knew fewer than half the respondents personally. Only 3 organizations contributed more than 1 participant: 2 each from a European payer, a US regulator, and a US provider/payer. All who declined to participate did so solely because their organizations forbade response.

The sample size, together with the breadth of responses, the types of organizations recruited, and the varied responsibilities of participants, precluded meaningful quantitative analyses across organizations and countries. Consequently, analyses were mainly descriptive and qualitative.

Respondents came from private and public sectors and from for-profit and not-for-profit organizations (Table 1). They were between 35 and 61 years old, and 17.3% (n = 18) were female. All were senior decision makers in their organizations. Information users who responded had substantial responsibility for accepting, rejecting, or deleting new or existing technology and interventions. In addition, 31.7% (n = 33) of total respondents (n = 104) were also responsible for payment decisions. Four (11.4%) of the 35 participating US providers/private payers were from private health insurance firms, 25 (71.4%) worked for combined insurance and service providers (eg, managed care organizations or MCOs), and the remaining 6 (17.1%) represented public insurers. Among French providers/payers, 1 was from a private insurance firm and 6 from public insurers/payers. Among public organizations such as the Food and Drug Administration (FDA), payment and acceptance decisions generally were made by groups within the organization at different times, and payment decisions were made after benefit/risk acceptance decisions.

Use of Benefit/Risk Results

All respondents confirmed that their organizations used benefit/risk results in essentially all decisions to accept, reject, pay for, or delete technology. However, a majority (86.5%) of all 104 respondents never produced their own formal analyses but relied on those of public (eg, the FDA) or private organizations (eg, pharmaceutical firms) and on published research results. Everyone acknowledged that their organizations used relative rather than absolute benefit/risk ratios (ie, comparisons among interventions with similar uses) in nearly every decision to accept or reject new technology, or to delete previously accepted interventions.

Nearly all respondents (93.3%) found surgical operations the most difficult to evaluate, mainly because of the lack of controlled trials establishing benefit and risk. Discontinuing existing operations found to be less effective or more dangerous than a new surgical procedure was equally difficult. Respondents were concerned particularly that new, often high-risk, and usually costly operations were introduced with little evaluation of benefits and risks or the economic effects on patients, health systems, and payers.

With the exception of participating academic organizations, 2 US MCOs, and 1 Swedish regulatory agency, every organization that responded had written benefit/risk criteria for adoption and rejection. None had specific benefit/risk thresholds for either acceptance or rejection; rather, acceptable ratios varied by severity of illness, treatment risk, and alternatives available. At the same time, nearly all providers/payers noted that once the national regulatory agency accepted a new technology for use, their organizations also accepted it, eventually if not immediately. Sildenafil was the most frequently cited example of a new technology accepted on the basis of benefit/risk evaluation but whose use was delayed because of reluctance to pay.

Every respondent noted that in most cases, although not for pharmaceuticals, the information available to evaluate benefits and risks of new technologies varied greatly in quality and quantity. The main complaint was that data, primarily for medical devices and surgical operations, were drawn mostly from uncontrolled observational or technical studies (eg, of hardware function); thus they provided no confident rates of quantifiable absolute or relative adverse outcomes or benefits. Nearly all providers and payers also noted that if a hardware technology (eg, a new type of noninvasive imaging device) appeared unlikely to cause harm and did what it was purported to do, then it was accepted. Payment, however, was not necessarily approved automatically.

Examples of interventions rejected on the basis of benefit/risk results included many complementary and alternative medicine treatments, polymerase chain reaction screening of blood supply for hepatitis C, and electrical stimulation of hand and arm for post-stroke patients. The combination weight-loss treatment of fenfluramine and phentermine was the most frequently cited example of an accepted treatment subsequently delisted because of new benefit/risk results.

In nearly all provider and payer organizations, decision makers had little or no direct responsibility for diffusion and actual use of an intervention in clinical practice after its acceptance. Ongoing evaluation was rare also, occurring only after severe adverse outcomes or the availability of new alternatives.

Medications clearly were the easiest to evaluate for benefit/risk, as pharmaceutical firms are responsible for such evaluations in the study countries. All responding providers/payers were confident that they could nearly always accept benefit/risk results for prescription medications on behalf of their organizations. This is not to understate respondents' concerns that randomized controlled efficacy trials are rarely large enough to define all important adverse effects. They also expressed concern that, given its widely recognized problems, post-marketing surveillance is a poor substitute for trials and systematic follow-up in defining long-term benefit and risk.

Even with high benefit/risk profiles, the perception of low total benefits can influence decisions. For example, respondents from 2 US MCOs noted that gabapentin for peripheral neuropathy was not accepted onto their formularies because of insufficient benefit, even though the FDA had approved it for use.

All respondents were concerned that new technology generally added to, not substituted for, existing diagnostic tools and treatments. They were also troubled by the cost-increasing consequences and uncertain health benefits of multiple interventions for the same diagnosis. Providers, payers, and regulators noted that deleting previously accepted technology posed difficulties different from those encountered when accepting new interventions. One difficulty was the disruptive personal effect of discontinuing the use of a technology or treatment that both physician and patient perceived as beneficial, especially when effective alternatives were unavailable. A recent example was the FDA decision to overturn its previous approval of alosetron for irritable bowel syndrome because of risk of intestinal blockage. Patient complaints, some US respondents claimed, forced the FDA to re-approve alosetron for use.

Use of Benefit/Cost Results

A large majority (94.4%) of all respondents and their organizations used some form of economic evaluation in decision making (Table 2). Restricting the definition to formal benefit/cost analyses brought the proportion of providers, payers, and regulators (n = 78) that regularly used formal benefit/cost analyses (most commonly cost-effectiveness analyses) to 42.1% (Table 3). Nearly all remaining members of these groups used such analyses occasionally.

UK respondents noted that all new technology was required to have formal benefit/cost analyses prior to decisions on acceptance and payment. Formal benefit/cost evaluation was mandatory for 18 private payers (including 7 MCOs) in the United States and 1 private insurer in France. Interestingly, pharmaceutical and device manufacturers (ie, technology firms) reported that they did not use their firms' benefit/cost analyses in pricing products, although they did use these results in sales and marketing efforts.

Surprisingly, generalizability or transferability of results was not identified by any information user as an important issue. However, timeliness was clearly an important factor in benefit/cost use; economic outcomes were rarely available when needed for decision making. As Wallace and colleagues found,14 service providers and payers did not use benefit/cost results in developing clinical practice guidelines. No responding provider or payer organization had a predefined benefit/cost ratio by which it would accept, reject, pay for, or delete interventions. Only 1 US MCO based its decisions to accept, reject, or pay for a new technology mainly on formal benefit/cost (value) calculations. However, 92.3% of European respondents predicted that the European Union would soon require all member nations to provide formal benefit/cost evaluations, whether for pharmaceuticals alone or for all new healthcare technology.

Respondents from the United Kingdom and elsewhere commented that the National Institute for Clinical Excellence (NICE) is the only public body in the 4 study countries currently responsible for both benefit/risk and benefit/cost evaluations. However, 2 British respondents pointed out that benefits still predominate, and expected benefit/cost ratios are used mainly to confirm benefit/risk decisions. Two respondents noted that Sweden would independently require formal benefit/cost (likely cost/quality-adjusted life-year [QALY]) analyses, after acceptance based on benefit/risk, for all pharmaceuticals, probably within the next 2 years. The United Kingdom, Ontario, and Australia already have these requirements. One respondent also noted that in deciding to accept and pay for computerized axial tomography scans in 1975, Sweden became the first country to make a public decision in healthcare based on formal economic evaluation.

Those responsible for payment may still reject a new intervention even if others in their organization accept it based on benefit/risk outcomes. Examples of technology accepted but not paid for, at least initially, included orlistat for weight reduction, electrical stimulation for nonunion of bone fractures, islet cell and pancreas transplantation for diabetes mellitus, electron-beam computed tomography to diagnose coronary artery calcification, continuous glucose-monitoring devices, chemosensitivity assays for oncology chemotherapy, and vertebral axial decompression for back pain. Less common examples provided by French insurers and some US MCOs included screening densitometry to diagnose osteoporosis and laser keratotomy to improve visual acuity. At the other extreme, many US MCOs and insurers would not pay for the nicotine patch unless accompanied by antismoking group therapy, a clear example of willingness to pay more for increased benefit.

The most commonly cited examples of interventions initially denied payment for benefit/cost reasons but subsequently accepted were positron emission tomography scans and chiropractic treatments for back and neck pain. US providers/payers were the most aggressive in not paying for specific interventions based on benefit/cost criteria. Definitions of medical necessity, the experimental nature of some new interventions, and contractual restrictions also had important roles in decisions to deny acceptance or refuse payment.

Two examples were raised of policy changes that bear on deletion of interventions previously accepted and paid for. Both have important implications for expanding the role of sound medical evidence in benefit/cost analysis as the basis for health system resource allocation and payment decisions. If such examples become common, best medical evidence and outcomes of appropriate formal economic evaluations will be used increasingly in helping to make such decisions. Their importance lies not in where the evidence came from but in whether the evidence is applicable and derived from appropriately designed studies. First, Sweden is implementing new policies that require pharmaceutical firms to re-evaluate benefits of currently marketed prescription medications. Substantial delisting is expected. Second, a decision was made in France to reduce or stop reimbursement for pharmaceuticals of no demonstrated benefit. It is expected that 30% of such medications will be removed from payment lists in the first round, with complementary and alternative medications disproportionately affected.

DISCUSSION

Broad use of benefit/risk results in decision making was not surprising. Unexpectedly, however, a high percentage (42.1%) of information users reported use of benefit/cost results in decision making. Current literature paints a picture different from this study's results.6,7,11,15 For example, a 4-country study on actual use found large numbers of economic evaluations submitted to public decision makers but uncertainty about whether such information influenced decisions.6 Weatherly and colleagues, in their survey of 102 health authorities in England, found that cost-effectiveness results were well understood but not often used.15 Hoffmann and associates found that UK health authorities recognized the usefulness of published economic evaluations but were concerned about narrow research questions, poor generalizability of results, and uncertain methodological rigor.11

A case study by Pausjenssen et al directly measured actual use of economic analyses by the Drug Quality and Therapeutics Committee of the Ontario (Canada) Drug Benefit Program.16 Surveys of 9 members after each of 12 committee meetings indicated that economic analyses affected decisions, but mainly for high-cost innovative medications; otherwise, clinical outcomes predominated.

Bethan and colleagues indirectly measured use of economic evaluations in Australia. They found that benefit/cost standards required for licensing and payment of pharmaceuticals were closely related to decisions to accept and pay for new pharmaceuticals.17 The Pharmaceutical Benefits Advisory Committee was unlikely (2/26 submissions) to recommend a medication for acceptance and reimbursement if incremental cost per life-year saved was greater than AU $76 000, and unlikely (1/26 submissions) to reject one if incremental cost per life-year saved was less than AU $42 000.
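
As a purely illustrative sketch of the kind of implicit decision band these findings describe, the logic can be written as follows. The two thresholds echo the figures cited above, but the example submission numbers and the function names are hypothetical, not data from Bethan et al or the PBAC.

# Illustrative sketch only: an ICER-based recommendation band loosely modeled on the
# pattern reported above for the Australian PBAC. Thresholds echo the cited figures;
# the example submission and the function names are hypothetical.

def icer(delta_cost: float, delta_life_years: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra life-year gained."""
    if delta_life_years <= 0:
        raise ValueError("ICER is only meaningful when the new intervention adds benefit")
    return delta_cost / delta_life_years

def recommendation(value: float,
                   accept_below: float = 42_000,   # AU$ per life-year saved (lower bound cited above)
                   reject_above: float = 76_000):  # AU$ per life-year saved (upper bound cited above)
    """Map an ICER onto a coarse accept / indeterminate / reject band."""
    if value < accept_below:
        return "likely to be recommended for reimbursement"
    if value > reject_above:
        return "unlikely to be recommended"
    return "indeterminate: other clinical and budget factors dominate"

# Hypothetical submission: AU$12 000 extra cost and 0.2 extra life-years per patient.
example = icer(delta_cost=12_000, delta_life_years=0.2)   # 60 000 per life-year gained
print(round(example), recommendation(example))

In practice, of course, no agency applies such a band mechanically, as the following paragraph notes.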

Raftery's evaluation of early implementation of guidelines found that, with only 1 exception, NICE recommended for approval and payment every new technology with a cost/QALY less than £31 000.18 Neither Raftery nor Bethan et al found clear and conscious use of specific or preselected costs per outcome, but both studies had results similar to the widely accepted 2-decade standard of US $50 000 per life-year saved, a standard that is itself being questioned.19,20 Thus, these studies indicate the use of economic evaluations by multiple decision makers.10

There was no disagreement among respondents that all organizations involved in producing benefit/cost evaluations, including pharmaceutical and other technology firms and private consultants, can do appropriate and methodologically sound benefit/risk, benefit/cost, and clinical-outcome studies. However, respondents expressed concerns about the outcomes of economic evaluations by these firms because such studies nearly always yield findings positive to funding firms. As 1 French respondent asked, "Why are there so infrequently studies with negative results, and why do such studies nearly always show I am going to save money?"

Conflict

Two conflicting themes emerged from responses of information users and academics. On the one hand, most respondents (65.4%) agreed that a large percentage of benefit/cost studies were generally methodologically sound enough to help decision makers, even though nearly all agreed that such studies could be even better methodologically. At the same time, 89.2% of providers, payers, regulators, and academics felt there was insufficient transparency, mainly of methods and analyses, to judge the quality and results of benefit/cost studies. Additionally, economic outcomes were rarely available when needed, and most decision makers were not knowledgeable enough to evaluate the quality of methods and data used. Narrowly focused decision analytic models were the most troublesome because research questions, assumptions, logic, technical details, and interpretations of variables were often difficult to understand and relate to results. The comments on decision modeling differ from those of a recently published survey on the use of pharmacoeconomic models in decision making.21
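
To make concrete what a narrowly focused decision-analytic model involves, and why respondents wanted its assumptions made transparent, the following is a minimal two-arm decision-tree sketch. Every probability, cost, and utility value is a hypothetical placeholder; it is not a model used or submitted by any respondent.

# Minimal two-arm decision-tree sketch. All inputs are hypothetical placeholders
# chosen only to show the structure such models share.

ARMS = {
    "new treatment": {"p_response": 0.70, "treatment_cost": 5_000.0},
    "usual care":    {"p_response": 0.55, "treatment_cost": 1_500.0},
}
QALY_IF_RESPONSE = 0.80         # quality-adjusted life-years if treatment succeeds (assumed)
QALY_IF_NO_RESPONSE = 0.50      # QALYs if treatment fails (assumed)
COST_OF_FAILURE_CARE = 3_000.0  # additional cost of managing treatment failure (assumed)

def expected_outcomes(arm):
    """Return (expected cost, expected QALYs) for one arm of the tree."""
    p = arm["p_response"]
    expected_cost = arm["treatment_cost"] + (1 - p) * COST_OF_FAILURE_CARE
    expected_qalys = p * QALY_IF_RESPONSE + (1 - p) * QALY_IF_NO_RESPONSE
    return expected_cost, expected_qalys

cost_new, qaly_new = expected_outcomes(ARMS["new treatment"])
cost_old, qaly_old = expected_outcomes(ARMS["usual care"])
icer = (cost_new - cost_old) / (qaly_new - qaly_old)   # incremental cost per QALY gained
print(f"Incremental cost per QALY gained: {icer:,.0f}")

Even in this toy form, the answer is driven entirely by the assumed response probabilities and unit costs, which is exactly the kind of detail respondents said was difficult to extract from submitted models.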

No respondent noted the conflict between general acceptance of benefit/risk results produced by firms and widespread suspicion of benefit/cost studies. It may be that the financial implications of benefit/risk studies are weaker incentives to make results appear more beneficial and less risky than they are, and that such results are less variable in interpretation, whereas the direct economic effects of benefit/cost results are clear for providers/payers (expected budget increases) and for pharmaceutical and device firms (increased sales and profits).

Options for Solution

A first step toward resolving the underuse of formal economic evaluations in healthcare decision making is for decision-making groups (mainly payers) to require their use. A second option is for payers to require adherence to accepted standards for benefit/cost analysis and reporting, which would improve applicability to user needs and address concerns about existing benefit/cost evaluations.22-24 A desirable side effect may be increased use of all evidence-based care.25 Next, independent expert reviewers should examine benefit/cost studies for methodological soundness and appropriateness of results, as government regulators do now with benefit/risk studies. Then these benefit/cost evaluations can be used confidently as a basis for acceptance (in conjunction with benefit/risk results) and payment. Such policies would increase the likelihood of better clinical and economic decision making and resource allocation, thereby improving value for private and public money spent on health and medical services.25

The main limitation of this study was the sample selected: small groups of selected persons in 4 countries, certainly not a random probability sample. Although the results may not be completely generalizable, it is likely that the main conclusion is valid.

CONCLUSION

Economic considerations are important factors in decisions to accept and pay for health and medical services. Though they considered the current methodologies and variability of information less than ideal, respondents indicated widespread and growing use of multiple formal economic data and evaluations in healthcare decision making.

From the Departments of Medicine and Health Care Systems and the Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia.

Address correspondence to: Bernard S. Bloom, PhD, University of Pennsylvania, 3615 Chestnut Street, Philadelphia, PA 19104-2676. E-mail: bsbloom@mail.med.upenn.edu.

REFERENCES

1. Petty W. The Economic Writings of Sir William Petty. Hull CH, ed. Cambridge: Cambridge University Press; 1948.

2. Shattuck L. Report of the Sanitary Commission of Massachusetts, 1850. Cambridge, Mass: Harvard University Press; 1948.

3. Elixhauser A, Halpern M, Schmier J, Luce BR. Health care CBA and CEA from 1991 to 1996: an updated bibliography. Med Care. 1998;36(suppl 5):MS18-MS147.

4. Drummond M, Brown R, Fendrick AM, Fullerton P, Neumann P, Taylor R, Barbieri M. Use of pharmacoeconomic information: report of the ISPOR task force on use of pharmacoeconomic/health economic information in health-care decision making. Value Health. 2003;6:407-416.

5. Drummond M, Dubois D, Garattini L, et al. Current trends in the use of pharmacoeconomics and outcomes research in Europe. Value Health. 1999;2:323-332.

6. Harris A, Buxton M, O'Brien B, Rutten F, Drummond M. Using economic evidence in reimbursement decisions for health technologies: experience of 4 countries. Expert Rev Pharmacoecon Outcomes Res. 2002;1:7-12.

7. Hoffmann C, von der Schulenburg GM, for the EUROMET Group. The influence of economic evaluation on decision making: a European survey. Health Policy. 2000;52:179-192.

8. Bloom BS, Fendrick AM. Timing and timeliness in medical care evaluation. Pharmacoeconomics. 1996;9:183-187.

9. Kernick DP. The impact of health economics on health care delivery: a primary care perspective. Pharmacoeconomics. 2000;18:311-315.

10. McDonald R, Haycox A, Walley T. The impact of health economics on healthcare delivery: the economists' perspective. Pharmacoeconomics. 2001;19:803-809.

11. Hoffman C, Stoykova BA, Nixon J, Glanville JM, Misso K, Drummond MF. Do health-care decision makers find economic evaluations useful? The findings of focus group research in UK health authorities. Value Health. 2002;5:71-79.

12. Dobbins M, Cockerill R, Barnsley J, Ciliska D. Factors of the innovation, organization, environment, and individual that predict the influence five systematic reviews had on public health decisions. Int J Technol Assess Health Care. 2001;17:467-478.

13. Drummond M, Weatherly H. Implementing the findings of health technology assessments. If the CAT got out of the bag, can the Tail wag the dog? Int J Technol Assess Health Care. 2000;16:1-12.

14. Wallace JF, Weingarten SR, Chiou C-F, et al. The limited incorporation of economic analyses in clinical practice guidelines. J Gen Intern Med. 2002;17:210-220.

15. Weatherly H, Drummond M, Smith D. Using evidence in the development of local health policies. Int J Technol Assess Health Care. 2002;18:771-781.

16. Pausjenssen AM, Singer PA, Detsky AS. Ontario's formulary committee: how recommendations are made. Pharmacoeconomics. 2003;21:285-294.

17. Bethan G, Harris A, Mitchell A. Cost-effectiveness analysis and the consistency of decision making: evidence from pharmaceutical reimbursement in Australia (1991-1996). Pharmacoeconomics. 2001;19:1103-1109.

18. Raftery J. NICE: faster access to modern treatments? Analysis of guidance on health technologies. BMJ. 2001;323:1300-1303.

19. Ubel PA, Hirth RA, Chernew ME, Fendrick AM. What is the price of life and why doesn't it increase at the rate of inflation? Arch Intern Med. 2003;163:1637-1641.

20. Hirth RA, Chernew MA, Miller E, Fendrick AM, Weissert WG. Willingness to pay for a quality-adjusted life year: in search of a standard. Med Decis Making. 2000;20:332-342.

21. Olson BM, Armstrong EP, Grizzle AJ, Nichter MA. Industry's perception of presenting pharmacoeconomic models to managed care organizations. J Manag Care Pharm. 2003;9:159-167.

22. Hill SR, Mitchell AS, Henry DA. Problems with the interpretation of pharmacoeconomic analyses: a review of submissions to the Australian Pharmaceutical Benefits Scheme. JAMA. 2000;283:2116-2121.

23. Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in Health and Medicine. New York: Oxford University Press; 1996.

24. Drummond M, O'Brien B, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes. 2nd ed. Oxford: Oxford University Press; 1997.

25. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J, and the Knowledge Transfer Study Group. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81:221-248.
