The American Journal of Managed Care

September 2011
Volume 17, Issue 9

Quality Measurement of Medication Monitoring in the Meaningful Use Era

Shifting from claims to integrated electronic health records to calculate quality metrics will improve reported quality because of changes in data capture, not because of true quality improvements.

Objectives:

While the 2011 implementation of “meaningful use” legislation for certified electronic health records (EHRs) promises to change quality reporting by overcoming data capture issues affecting quality measurement, the magnitude of this effect is unclear. We compared the measured quality of laboratory monitoring of Healthcare Effectiveness Data and Information Set (HEDIS) medications based on specifications that (1) include and exclude patients hospitalized in the measurement year and (2) use physician test orders and patient test completion.

Study Design:

Cross-sectional study.

Methods:

Among patients 18 years and older in a large multispecialty group practice utilizing a fully implemented EHR between January 1, 2008, and July 31, 2008, we measured the prevalence of ordering and completion of laboratory tests monitoring HEDIS medications (cardiovascular drugs [angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, digoxin, and diuretics] and anticonvulsants [carbamazepine, phenobarbital, phenytoin, and valproic acid]).

Results:

Measures excluding hospitalized patients were not statistically significantly different from measures including hospitalized patients, except for digoxin, but this difference was not clinically significant. The prevalence of appropriate monitoring based on test orders typically captured in the EHR was statistically significantly higher than the prevalence based on claims-based test completions for cardiovascular drugs.

Conclusions:

HEDIS quality metrics based on data typically collected from claims undermeasured the quality of medication monitoring compared with EHR data. The HEDIS optional specification excluding hospitalized patients from the monitoring measure does not have a significant impact on reported quality. Integration of EHR data into quality measurement may significantly change some organizations’ reported quality of care.

(Am J Manag Care. 2011;17(9):633-637)

Integration of electronic health record (EHR) data into quality measurement may significantly change some organizations’ reported quality of care.

  • For example, measuring laboratory monitoring quality from claims data (test completion) underestimates care quality compared with measurements based on EHR data (physician test ordering).

  • If quality-of-care measurements improve concurrent with “meaningful use” implementation, it may be difficult to discern whether this change reflects true quality improvements or measurement changes.

  • One strategy to address this issue is to measure both claims-only and claims-plus-EHR-based measures concurrently so that quality changes can be interpreted going forward.

The Institute of Medicine’s report Crossing the Quality Chasm1 prompted significant efforts toward the pursuit of high-quality healthcare. As a result, major investments to improve healthcare quality have focused on 2 areas: (1) the development and public reporting of quality-of-care measures and (2) the promotion and adoption of electronic health records (EHRs).2 The synergy of these 2 concurrent efforts was recently accelerated by the 2011 implementation of incentive payments for the meaningful use of certified EHR technology under the 2009 American Recovery and Reinvestment Act3; this synergy will have an important impact on healthcare in the United States.

The Centers for Medicare & Medicaid Services’ stage 1 rollout of “meaningful use” criteria for EHRs in January 2011 aimed to reduce disparities in EHR use across healthcare providers by providing monetary incentives for EHR adoption.4 While one of the goals in promoting the widespread adoption of EHRs is to improve quality of care,5 there is evidence to suggest that expanded EHR availability and meaningful data integration may affect measured quality through changes in data capture and measurement rather than through actual improvements in healthcare quality.2,6-8

For example, prior studies have shown that quality measures calculated from administrative claims alone differ from measures that incorporate medical record data.2,6-8 As a result, “hybrid” calculation methods combining administrative and medical record data were incorporated into some, but not all, Healthcare Effectiveness Data and Information Set (HEDIS) performance measures.7 The implication is that healthcare enterprises without EHRs are disadvantaged,9-11 while those equipped to readily capture medical record data for quality reporting have an advantage, being able to report higher performance measure results than those using only administrative claims. While this phenomenon has been described for cancer screening and vaccination rates,2,6-8 it has not been examined for the quality measurement of the laboratory monitoring of medications.

Since drug-induced injury is common12,13 and failure to monitor high-risk medications is one of the leading factors contributing to adverse drug events,13 the National Committee for Quality Assurance included medication monitoring measures in HEDIS in 2006.14 These standards recommend laboratory monitoring of high-risk, narrow-therapeutic-window medications, including angiotensin-converting enzyme (ACE) inhibitors, angiotensin II receptor blockers (ARBs), digoxin, diuretics, and anticonvulsants (phenobarbital, phenytoin, valproic acid, and carbamazepine),15 with measure calculation based on administrative data only. Aware of the difficulties with data capture across transitions of care from the hospital to the ambulatory setting, HEDIS optionally specifies that measurement may be affected by population selection (ie, excluding or including hospitalized patients whose hospital laboratory tests are not consistently reported to ambulatory medical records or claims), but it does not specify that measurement may be affected by the source of the data.

Because the magnitude of data source and population selection effects is unclear, we conducted this study to assess the ordering and completion of laboratory tests for high-risk HEDIS medications in a large multispecialty group practice to compare the reported quality of monitoring based on (1) 2 HEDIS specifications for the population (including and excluding patients hospitalized in the measurement year) and (2) 2 outcome measures (physician test orders vs actual completion of tests). With the federal investment promising to eliminate barriers to EHR adoption, our findings have implications for quality-of-care reporting and measurement, and will inform some expected developments resulting from the EHR meaningful use legislation.

METHODS

Study Setting and Population

This study was conducted in a large multispecialty group practice that provides the majority of medical care to members of a closely associated New England-based health plan. The group practice employs 250 outpatient physicians at 30 ambulatory clinic sites. The practice uses the EpicCare Ambulatory EHR system and provides medical care to approximately 180,000 individuals. Patients had to be continuously enrolled during the observation period and could not reside in a long-term care facility. Data about medication exposure were derived from the prescription drug claims of the health plan. Data about laboratory test ordering and completion were derived from the multispecialty group’s EHR. The age and sex characteristics of the study population were similar to those of the general population of the United States in 200016: 54% of the adults were female and 36% were over 65 years of age. The group practice has only recently begun to capture race and has incomplete data, but the health plan’s market research indicates a patient racial mix consistent with the plan’s catchment area: whites (79%), Hispanics (12%), African Americans (5%), and other races (4%).

HEDIS Drugs and Recommended Monitoring Tests

We used drug dispensing claims from January 1, 2008, to July 31, 2008, to identify the first dispensing of 1 of the high-risk medications for a patient on or after January 1, 2008. We used drug dispensing claims from January 1, 2007, to December 31, 2007, as a look-back period to identify patients who had been taking a drug for at least 180 days, as specified by HEDIS guidelines; we included only patients with evidence of another drug dispensing in the 180 days prior to January 1, 2008 (Table 1). For the study drugs, appropriate annual monitoring was defined as receipt of a serum potassium test and either a serum creatinine or blood urea nitrogen test for ACE inhibitors/ARBs, digoxin, and diuretics, and receipt of a test for anticonvulsant drug serum concentration for the anticonvulsants. Test ordering and test completion were defined as having occurred if there was at least 1 recommended test order and test completion for each specific drug-test pair within 180 days before or after the index dispensing in 2008. To test the effect of population specification on reported quality as specified by HEDIS,12 we created 2 estimates for each drug-test combination: one based on the entire study population and a second excluding patients with any hospitalization in the observation year. To test the effect of outcome specification on reported quality, we also created estimates based on test completion.
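To make this measure construction concrete, the following is a minimal sketch of the denominator and ordering-based numerator logic in Python/pandas. The table layouts, column names, and test labels (dispensings, lab_orders, patient_id, test_name, and so on) are hypothetical placeholders rather than the study’s actual data model, and the study’s own analyses were programmed in SAS (see Statistical Analysis).

```python
import pandas as pd

# Hypothetical inputs (the study's actual data model is not published):
#   dispensings: patient_id, drug_class, dispense_date  (pharmacy claims)
#   lab_orders:  patient_id, test_name, order_date      (EHR test orders)
INDEX_START, INDEX_END = pd.Timestamp("2008-01-01"), pd.Timestamp("2008-07-31")
WINDOW = pd.Timedelta(days=180)

# Per the definition above: potassium AND (creatinine OR BUN) for the
# cardiovascular drugs; a serum drug concentration for each anticonvulsant.
REQUIRED_TESTS = {
    "ACE_ARB":   [{"potassium"}, {"creatinine", "BUN"}],
    "digoxin":   [{"potassium"}, {"creatinine", "BUN"}],
    "diuretic":  [{"potassium"}, {"creatinine", "BUN"}],
    "phenytoin": [{"phenytoin_level"}],
}

def hedis_denominator(dispensings: pd.DataFrame, drug_class: str) -> pd.DataFrame:
    """First dispensing per patient in the 2008 index window, kept only if
    another dispensing of the drug occurred in the 180 days before 1/1/2008."""
    d = dispensings[dispensings.drug_class == drug_class]
    index_rx = (d[d.dispense_date.between(INDEX_START, INDEX_END)]
                .sort_values("dispense_date")
                .drop_duplicates("patient_id")
                .rename(columns={"dispense_date": "index_date"}))
    prior = d[d.dispense_date.between(INDEX_START - WINDOW, INDEX_START,
                                      inclusive="left")]
    return index_rx[index_rx.patient_id.isin(prior.patient_id)]

def ordering_rate(denominator: pd.DataFrame, lab_orders: pd.DataFrame,
                  test_groups: list) -> float:
    """Share of the denominator with at least one order from every required
    test group within 180 days before or after the index dispensing."""
    m = denominator.merge(lab_orders, on="patient_id")
    in_window = m[(m.order_date - m.index_date).abs() <= WINDOW]
    tests_by_patient = in_window.groupby("patient_id").test_name.agg(set)
    numerator = sum(all(tests & g for g in test_groups)
                    for tests in tests_by_patient)
    return numerator / len(denominator)
```

With real extracts loaded into these frames, ordering_rate(hedis_denominator(dispensings, "digoxin"), lab_orders, REQUIRED_TESTS["digoxin"]) would yield the ordering-based estimate; substituting a claims-derived table of completed tests for lab_orders would yield the completion-based estimate, and dropping hospitalized patients from the denominator would yield the alternative HEDIS population specification.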

Statistical Analysis

We used the χ2 test to compare differences in monitoring based on the 2 HEDIS specifications for the population (including and excluding patients hospitalized in the measurement year) and the 2 outcome measures (physician test orders vs actual test completion). All analyses were conducted with SAS version 9.2 (SAS Institute Inc, Cary, North Carolina). This study was approved by the institutional review boards of our research institution and the participating group practice.
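As an illustration only, this comparison of two proportions can be reproduced with a χ2 test in a few lines of Python using scipy (the study itself used SAS); the cell counts below are hypothetical, not the study’s data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are the two measure specifications being
# compared (e.g., test orders vs test completions); columns are the counts
# of patients appropriately monitored and not monitored.
table = [[923, 77],
         [802, 198]]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.3g}")
```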

RESULTS

Test Ordering by Clinicians Including and Excluding Hospitalized Patients

Approximately 10% of each population of medication users had a hospitalization in the observation year, except digoxin users, approximately 20% of whom were hospitalized (Table 2). There were no statistically significant differences in the prevalence of appropriate test monitoring when we compared estimates based on the sample including hospitalized patients with estimates based on the sample excluding hospitalized patients. For example, 93.9% of hospitalized patients prescribed digoxin had appropriate test monitoring, compared with 92.3% of patients who were not hospitalized (P = .18).

Test Ordering by Clinicians Compared With Overall Test Completion

The prevalence of test completion for all drugs was lower than that of physician test ordering because patient adherence to test orders ranged from 85.6% to 93.3% (data not shown). When we examined the sample that included hospitalized patients and compared physician test orders with overall test completion, there were statistically significant differences between these 2 measures for the cardiovascular drugs, but not the anticonvulsants. For example, for diuretics, 92.3% of physicians ordered the appropriate monitoring test, but only 80.2% of all indicated tests were completed (P <.001; Table 3). These differences did not reach statistical significance for the less commonly used drugs.
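Because a test can be completed only after it is ordered, the completion rate is approximately the ordering rate multiplied by patient adherence. Under that simplifying assumption, a quick check with the diuretic figures above recovers an adherence value inside the reported range:

```python
# Diuretic figures reported above (assuming completion = ordering x adherence).
ordering = 0.923      # physicians ordered the appropriate monitoring test
completion = 0.802    # share of all indicated tests completed
print(f"implied adherence = {completion / ordering:.1%}")  # ~86.9%, within 85.6%-93.3%
```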

DISCUSSION

This study examines 2 aspects of HEDIS quality measurement for medication monitoring. We found that the selection of the outcome measure can affect a physician’s reported quality. In contrast to what HEDIS recommendations imply, the decision to include or exclude hospitalized patients from the measure estimates does not appear to have a significant impact. These results have implications for quality-of-care measurement.

Our findings are consistent with those of other studies indicating that measured performance varies depending upon the source of information, whether administrative data, the EHR, or a combination.2,6-8,17 Institutions relying solely on administrative data may underreport HEDIS quality-of-care measures.6,7 For example, among 283 commercial health plans that submitted HEDIS data, 178 had a greater than 10-percentage-point difference in the prevalence of post-myocardial infarction beta-blocker use when the HEDIS measure used administrative plus medical record data compared with administrative data alone.7 We build on this literature by showing that it is important to disentangle physician ordering behavior from patient test completion behavior. For example, if a physician appropriately ordered a test 100% of the time, but patients failed to complete the test 20% of the time, the physician would be judged to have an 80% monitoring rate, the same as a physician who ordered a test only 80% of the time but had patients who always completed tests. If HEDIS measurements are meant to reflect physician quality18 regardless of patient adherence,19 then this distinction is important for quality reporting. Alternatively, if the HEDIS measure is meant to reflect the practice’s ability to achieve monitoring through proper education, provision of convenient testing, and follow-up with no-shows, then using administrative claims to report completion only may be appropriate.

Although HEDIS has taken the precaution of guarding against potential undermeasurement of monitoring due to hospitalization, excluding hospitalized patients from outcome measurement does not appear to make a significant difference. Our overall findings on rates of testing are consistent with those of several previous studies that report variable, and sometimes low, rates of laboratory monitoring of medications. For example, previous studies report that monitoring for the anticonvulsant valproic acid ranged from 39.8% to 62%,20-22 while monitoring ranged from 60% to 70% for cardiovascular and nonstatin lipid-lowering drugs23 and from 75% to 90% for statins.20,23

A review of the literature24 shows that healthcare systems differ in their measurement of the medication-monitoring metric, basing their estimates on test completion rates from claims data23,25-28 in some cases and on physician test orders from EHRs in other cases.29,30 Most administrative claims-based studies include only test completion because data on test ordering depend on the availability of electronic medical records. Ordering rates necessarily exceed completion rates, so using completion rates based on administrative claims data may underreport physician performance compared with using ordering rates based on electronic medical record data. Ordering rates, however, are likely unavailable for many providers that do not have an EHR. Because HEDIS standards do not specify which measure (ordering or completion) is required, there is the potential to compare health plan performance on an apples-to-oranges basis. A key issue is that differences in the data elements available, and a shift in data capture abilities due to meaningful use, will likely affect the reporting of quality measurements. Therefore, it will be important to understand whether temporal trends in quality of care are due to actual improvements in care or to changes in quality measurement.

Limitations of our study deserve to be noted. First, our study was conducted in a single multispecialty group practice, limiting the generalizability of our findings to other settings. Second, we were unable to confirm patient adherence to drugs and therefore were unable to identify patients who did not complete tests because they were no longer using the medication.

The widespread use of EHR data for quality measurement has been delayed by a variety of problems with interoperability, lack of standardized coding schema, and the inability to retrieve some critical data electronically. Integration of EHR data into quality metrics is only now being actively pursued by a large section of the healthcare community.4,5 Our findings, taken together with the study of Pawlson et al showing that quality ranks of health plans based on HEDIS hybrid performance measures differed from their ranks based on administrative-only data,7 suggest that reported quality of care will improve over time as integrated administrative/EHR-based measures are more widely used. Further, if quality-of-care measurements improve concurrently with EHR rollout, it will be difficult to discern whether that change reflects improvements in healthcare quality or simply differences in quality measurement methodology. One strategy to address this issue may be to measure both administrative-only and administrative-plus-EHR measures concurrently so that changes can be interpreted going forward.

Acknowledgments

The authors would like to acknowledge the contributions of Devi Sundaresan, MA.

Author Affiliations: From University of Massachusetts Medical School (JT, TSF, SHF, JHG), Worcester; Meyers Primary Care Institute (JT, TSF, SJG, DJP, LDG, JHG), Worcester, MA.

Funding Source: This study was funded by grants R18 HS17203, R18 HS17817, and R18 HS17906 from the Agency for Healthcare Research and Quality.

Author Disclosures: The authors (JT, TSF, SHF, SJG, DJP, LDG, JHG) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JT, TSF, LDG, JHG); acquisition of data (JT, TSF, SJG, DJP, LDG, JHG); analysis and interpretation of data (JT, TSF, SHF, DJP, LDG, JHG); drafting of the manuscript (JT, SJG); critical revision of the manuscript for important intellectual content (JT, TSF, SHF, JHG); statistical analysis (JT, DJP); obtaining funding (JT, TSF, JHG); and administrative, technical, or logistic support (SHF, SJG, LDG).

Address correspondence to: Jennifer Tjia, MD, MSCE, Assistant Professor of Medicine, Division of Geriatric Medicine, University of Massachusetts Medical School, Biotech Four, 377 Plantation St, Ste 315, Worcester, MA 01605. E-mail: jennifer.tjia@umassmed.edu.

1. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.

2. Tang PC, Ralston M, Arrigotti MF, Qureshi L, Graham J. Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures. J Am Med Inform Assoc. 2007;14(1):10-15.

3. Centers for Medicare & Medicaid Services. Electronic health record incentive program; final rule. 42 CFR parts 412, 413, 422, and 495. Fed Regist. July 28, 2010;75(144):44314-44588.

4. Blumenthal D, Glaser JP. Information technology comes to medicine. N Engl J Med. 2007;356(24):2527-2534.

5. Hersh W, Jacko JA, Greenes R, et al. Health-care hit or miss? Nature. 2011;470(7334):327-329.

6. Maclean JR, Fick DM, Hoffman WK, King CT, Lough ER, Waller JL. Comparison of 2 systems for clinical practice profiling in diabetic care: medical records versus claims and administrative data. Am J Manag Care. 2002;8(2):175-179.

7. Pawlson LG, Scholle SH, Powers A. Comparison of administrative-only versus administrative plus chart review data for reporting HEDIS hybrid measures. Am J Manag Care. 2007;13(10):553-558.

8. Keating NL, Landrum MB, Landon BE, Ayanian JZ, Borbas C, Guadagnoli E. Measuring the quality of diabetes care using administrative data: is there bias? Health Serv Res. 2003;38(6, pt 1):1529-1545.

9. Jha AK, DesRoches CM, Kralovec PD, Joshi MS. A progress report on electronic health records in U.S. hospitals. Health Aff (Millwood). 2010;29(10):1951-1957.

10. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care—a national survey of physicians. N Engl J Med. 2008;359(1):50-60.

11. Simon SR, McCarthy ML, Kaushal R, et al. Electronic health records: which practices have them, and how are clinicians using them? J Eval Clin Pract. 2008;14(1):43-47.

12. Gandhi TK, Weingart SN, Borus J, et al. Adverse drug events in ambulatory care. N Engl J Med. 2003;348(16):1556-1564.

13. Gurwitz JH, Field TS, Harrold LR, et al. Incidence and preventability of adverse drug events among older persons in the ambulatory setting. JAMA. 2003;289(9):1107-1116.

14. National Committee for Quality Assurance (NCQA). HEDIS® 2006 Summary Table of Measures and Product Lines. Washington, DC: NCQA; 2005.

15. National Committee for Quality Assurance (NCQA). HEDIS® 2009: Healthcare Effectiveness Data & Information Set. Vol 1: narrative. Washington, DC: NCQA; July 2008.

16. US Census Bureau. 2000 Census of Population and Housing, Summary Population and Housing Characteristics. Part 1. PHC-1-1. Table 1. Age and sex: 2000. Washington, DC: United States Census Bureau; November 2002. www.census.gov/prod/cen2000/phc-1-1-pt1.pdf. Accessed August 4, 2011.

17. Kerr EA, Smith DM, Hogan MM, et al. Comparing clinical automated, medical record, and hybrid data sources for diabetes quality measures. Jt Comm J Qual Improv. 2002;28(10):555-565.

18. Scholle SH, Roski J, Adams JL, et al. Benchmarking physician performance: reliability of individual and composite measures. Am J Manag Care. 2008;14(12):833-838.

19. Kerr EA, Krein SL, Vijan S, Hofer TP, Hayward RA. Avoiding pitfalls in chronic disease quality measurement: a case for the next generation of technical quality measures. Am J Manag Care. 2001;7(11):1033-1043.

20. Raebel MA, Lyons EE, Andrade SE, et al. Laboratory monitoring of drugs at initiation of therapy in ambulatory care. J Gen Intern Med. 2005;20(12):1120-1126.

21. Raebel MA, Chester EA, Newsom EE, et al. Randomized trial to improve laboratory safety monitoring of ongoing drug therapy in ambulatory patients. Pharmacotherapy. 2006;26(5):619-626.

22. Raebel MA, Lyons EE, Chester EA, et al. Improving laboratory monitoring at initiation of drug therapy in ambulatory care: a randomized trial. Arch Intern Med. 2005;165(20):2395-2401.

23. Palen TE, Raebel M, Lyons E, Magid DM. Evaluation of laboratory monitoring alerts within a computerized physician order entry system for medication orders. Am J Manag Care. 2006;12(7):389-395.

24. Fischer SH, Tjia J, Field TS. Impact of health information technology interventions to improve medication laboratory monitoring for ambulatory patients: a systematic review. J Am Med Inform Assoc. 2010;17(6):631-636.

25. Feldstein AC, Smith DH, Perrin N, et al. Improved therapeutic monitoring with several interventions: a randomized trial. Arch Intern Med. 2006;166(17):1848-1854.

26. Hoch I, Heymann AD, Kurman I, Valinsky LJ, Chodick G, Shalev V. Countrywide computer alerts to community physicians improve potassium testing in patients receiving diuretics. J Am Med Inform Assoc. 2003;10(6):541-546.

27. Raebel MA, Carroll NM, Andrade SE, et al. Monitoring of drugs with a narrow therapeutic range in ambulatory care. Am J Manag Care. 2006;12(5):268-274.

28. Matheny ME, Sequist TD, Seger AC, et al. A randomized trial of electronic clinical reminders to improve medication laboratory monitoring. J Am Med Inform Assoc. 2008;15(4):424-429.

29. Lo HG, Matheny ME, Seger DL, Bates DW, Gandhi TK. Impact of non-interruptive medication laboratory monitoring alerts in ambulatory care. J Am Med Inform Assoc. 2009;16(1):66-71.

30. Steele AW, Eisert S, Witter J, et al. The effect of automated alerts on provider ordering behavior in an outpatient setting. PLoS Med. 2005;2(9):e255.
