Objectives:
To measure the effect of electronic medical records (EMRs) on a publicly reported composite measure indicating optimal diabetes care (ODC) rates in ambulatory settings.
Study Design:
Data from Minnesota Community Measurement on 557 clinics were used, including information on ODC, EMR adoption, and clinic characteristics.
Methods:
A difference-in-differences strategy was used to estimate the impact of EMR adoption on patient outcomes while controlling for observed and unobserved clinic characteristics. Results were compared with a cross-sectional analysis of the same data.
Results:
EMRs had no observable effect on ODC for the average clinic during the first 2 years postadoption. EMRs may, however, generate modest (+4 percentage point) ODC increases for clinics in large, multisite practices. Cross-sectional analysis likely overestimates the effect of EMRs on quality.
Conclusions:
There is little evidence that EMR adoption improves diabetes care during the first 2 years postadoption. This is notable as diabetes is a condition for which information technology has the potential to improve care management. The results suggest that policy makers should not expect public sector EMR investments to yield significant short-term improvements in publicly reported measures.
(Am J Manag Care. 2013;19(2):144-149)
In 2009, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act, authorizing an estimated $30 billion in payments to healthcare organizations that purchase and implement electronic medical records (EMRs).1 This unprecedented action was based on the observation that the healthcare sector lagged in the adoption of information technology (IT), along with the expectation that EMRs would improve quality while reducing costs. With respect to quality improvement, it was thought that EMRs would improve care coordination, promote treatment guideline adherence, and simplify tracking of treatments and outcomes, reducing patients’ exposure to risk and unnecessary care.
Recent literature reviews generally have found evidence to support the beneficial effects of EMRs. However, the majority of studies have been carried out in inpatient settings, and studies often focused on a small set of technical functionalities.2 Evidence regarding the impact of EMRs implemented in physician practices is less extensive. There have been reports of strong favorable impacts of EMRs on quality,3,4 but some studies and experts argue that benefits are small and disputable.5-7 In particular, a recent large-scale analysis of physician survey data8 reported a positive relationship between EMRs and only 1 of 20 quality indicators. These authors state that their findings “raise concerns about the ability of health information technology to fundamentally alter outpatient care quality.” A recent study using medical-records data from a single community found that EMR use was correlated with large diabetes outcome improvements.9 However, the authors of this study also recognize that their cross-sectional empirical strategy may be subject to selection bias.
We add to the relatively limited number of quantitative analyses that address the relationship between EMRs and ambulatory care quality. Because past studies typically have used cross-sectional data and/or have not controlled for unobserved differences between clinics that adopted EMRs and those that have not (selection effects), their ability to draw inferences about the relationship between EMR adoption and publicly reported quality measures has been limited. In this article, we examine whether changes in physician practice quality measures are linked to EMR adoption using data from public reports of diabetes care. We contrast these results with findings from cross-sectional analyses of the same data. To help ensure the robustness of our approach, we also explore whether EMR adoption leads to changes in the measurement of quality metrics by examining missing data rates.
Methods
Data Sources
We used data on the quality of diabetes care provided by physician clinics in Minnesota from 2008 to 2010. These data were publicly reported by Minnesota Community Measurement (MNCM), a collaboration among a wide range of community stakeholders.10
Diabetes care is an appropriate focus for the analysis as there are widely accepted treatment guidelines that can be incorporated in EMRs. Treatment requires coordination of tests, prescriptions, and patient behavior, as well as management across time. EMRs should facilitate each of these tasks.
Data are reported annually by clinics on a voluntary basis through a process called MNCM Direct Data Submission. Required data elements are assembled from medical records by clinic abstractors. After completing quality checks and addressing Health Insurance Portability and Accountability Act requirements, clinics submit data to MNCM, which conducts quality checks and performs on-site audits to ensure data quality.11
Optimal diabetes care (ODC) scores were calculated for each submitting clinic. These scores measure the percentage of patients with diabetes (type 1 and type 2) aged 18 to 75 years who reach 5 treatment goals: (1) glycated hemoglobin (A1C) less than 8%; (2) blood pressure less than 130/80 mm Hg; (3) low-density lipoprotein cholesterol less than 100 mg/dL; (4) daily aspirin use unless contraindicated (ages 41-75 years only); and (5) documented tobacco-free status. MNCM altered the goal for A1C control in 2010, using <8% to replace the prior standard of <7%.11
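As a hypothetical illustration of the composite measure, a clinic-level ODC rate is simply the share of patients meeting all 5 goals. The patient records and field names below are invented, and the handling of the aspirin contraindication and age restriction is our reading of the goal definitions, not MNCM's specification:

```python
# Sketch of the ODC composite: a patient counts toward the clinic's ODC
# rate only if all 5 goals are met. Records and field names are invented.

def meets_odc(p):
    """Return True if a patient record meets all 5 ODC treatment goals."""
    aspirin_ok = (p["aspirin"]                 # daily aspirin use, or
                  or p["contraindicated"]      # documented contraindication, or
                  or not (41 <= p["age"] <= 75))  # outside the 41-75 age band
    return (p["a1c"] < 8.0                     # goal 1: A1C < 8%
            and p["sbp"] < 130 and p["dbp"] < 80  # goal 2: BP < 130/80 mm Hg
            and p["ldl"] < 100                 # goal 3: LDL < 100 mg/dL
            and aspirin_ok                     # goal 4: aspirin unless exempt
            and p["tobacco_free"])             # goal 5: tobacco-free status

patients = [
    {"age": 60, "a1c": 7.2, "sbp": 124, "dbp": 76, "ldl": 92,
     "aspirin": True, "contraindicated": False, "tobacco_free": True},
    {"age": 55, "a1c": 8.4, "sbp": 128, "dbp": 78, "ldl": 95,
     "aspirin": True, "contraindicated": False, "tobacco_free": True},
    {"age": 38, "a1c": 7.9, "sbp": 118, "dbp": 70, "ldl": 88,
     "aspirin": False, "contraindicated": False, "tobacco_free": True},
]

# Clinic-level ODC rate: 2 of these 3 hypothetical patients meet all goals.
odc_rate = sum(meets_odc(p) for p in patients) / len(patients)
```

Note the all-or-nothing structure: a patient failing any single goal (here, the second patient's A1C of 8.4%) does not count toward ODC, which is why composite rates are far lower than any component rate.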
Study Population
The study population was 557 clinics in Minnesota and neighboring states. These clinics included both stand-alone facilities and those that were members of multiclinic group practices. The number of clinics rose from 309 to 527 during the study period, while the number of groups grew from 58 to 123. Note that because a small number of clinics discontinued reporting, the total number of clinics exceeded the maximum number of clinics per year. Consequently, our data form an unbalanced panel with the number of clinics growing over time. Average sample characteristics from 2008 to 2010 are shown in Table 1.
Study Variables
The MNCM data documented clinic-level performance for ODC and its 5 component measures (Table 1), as well as the rates at which component measures were missing. These missing data rates reflect the clinic’s ability to document and report clinical information.
MNCM tracked whether each clinic’s data were drawn from a paper-based system (no EMR), a hybrid of paper and electronic documentation (partial EMR), or an entirely electronic system (EMR). An important limitation of these data is that they did not allow us to capture the systems’ functional capabilities, such as decision-support systems; however, they did allow us to measure an average effect of EMR adoption. EMR utilization rates were quite high in the study clinics; 54% of clinics used EMR systems, an additional 24% used partial EMR systems, and only 22% used no EMR. These rates are substantially higher than the national average.12,13 We observed 124 adoption events in our sample.
The data set contained additional information regarding the clinics and their patient populations. On average, clinics treated 427 diabetic patients and submitted a sample of 273 records. Most clinics were members of larger group practices composed of, on average, 15 clinics (Table 1). Clinic characteristics were notably different across EMR adoption levels. For instance, clinics with EMRs had a 25% larger diabetic patient population than clinics with no EMRs.
As a first step in our analysis, we used linear regression to measure average quality differences between clinics with no EMRs, partial EMRs, and EMRs (Table 2, model 1). To account for potential selection effects, we used a difference-in-differences strategy. We implemented this strategy using linear regression with clinic and time fixed effects, using the xtreg command in Stata version 11 (StataCorp, College Station, Texas). We also included observable, time-varying clinic controls such as the number of diabetes patients, the number of clinics within the group, and whether data were drawn from a sample or a census (model 3). Standard errors were corrected to reflect the fact that we used multiple observations for each clinic.14
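The difference-in-differences logic can be sketched as follows. This is a minimal illustration on an invented, noiseless panel, not the authors' Stata code: on a balanced panel, the two-way within transformation (demeaning by clinic, then by year) removes the clinic and year fixed effects, and regressing the demeaned outcome on the demeaned adoption indicator recovers the treatment effect. All clinic counts, effect sizes, and adoption timing are made up:

```python
# Two-way fixed-effects difference-in-differences on a synthetic panel.
# All numbers are invented for illustration.
import numpy as np

def two_way_demean(v, clinic, year):
    """Subtract clinic means, then year means (exact for balanced panels)."""
    out = v.astype(float).copy()
    for g in (clinic, year):
        for level in np.unique(g):
            mask = g == level
            out[mask] -= out[mask].mean()
    return out

# Balanced panel: 4 clinics observed in 3 years.
clinic = np.repeat(np.arange(4), 3)
year = np.tile(np.arange(3), 4)

alpha = np.array([0.10, 0.15, 0.20, 0.25])[clinic]   # clinic fixed effects
gamma = np.array([0.00, 0.01, 0.02])[year]           # year fixed effects
treat = ((clinic >= 2) & (year >= 1)).astype(float)  # clinics 2-3 adopt in year 1
beta_true = 0.04                                     # a +4-point ODC effect
y = alpha + gamma + beta_true * treat                # noiseless outcome

# Within transformation wipes out alpha and gamma; OLS on the demeaned
# adoption indicator then isolates the treatment effect.
dy = two_way_demean(y, clinic, year)
dd = two_way_demean(treat, clinic, year)
beta_hat = (dd @ dy) / (dd @ dd)  # recovers 0.04 on this noiseless panel
```

On real, unbalanced panels with noise, one would instead fit the regression with explicit clinic and year dummies (or an estimator such as Stata's xtreg) and cluster standard errors by clinic, as the article does; the demeaning above shows only why preadoption quality differences between adopters and nonadopters drop out of the estimate.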
Results
During 2010, the average clinic with no EMR achieved ODC for 17% of patients, and clinics with partial EMR utilization achieved ODC for 18%. In contrast, clinics with EMR achieved ODC for 26% of patients.
Although these differences were statistically significant, the characteristics of clinics with and without EMRs also differed. Clinics that had adopted EMRs prior to 2008 were members of larger group practices and treated more diabetes patients per clinic than clinics that eventually adopted EMRs. Similarly, clinics that eventually adopted EMRs were larger than those that never adopted EMRs. When we incorporated these clinic characteristics into the analyses (model 2), partial EMR use became uncorrelated with ODC, and the effect of EMR utilization was substantially dampened, although it remained statistically significant. After incorporation of clinic characteristics, EMR utilization was associated with only a 3 percentage point difference in ODC compared with clinics with no EMR.
This finding does not necessarily mean that having EMRs is the determining factor in achieving higher quality. Higher-quality clinics might, for example, be more likely to adopt EMRs. Conversely, lower-performing clinics might adopt EMRs to address quality problems. We examined the differences in individual clinic quality before and after EMR adoption relative to quality differences in clinics that did not change their EMR systems over the same period. If unobserved aspects of clinic quality (eg, clinic culture) were relatively stable across time, then taking differences eliminates this bias. Similarly, analyzing differences across adopting and nonadopting clinics eliminated bias from changes across time in factors that were common to clinics (eg, changes in reporting rules).
Using this approach, we found no statistically significant relationship between EMR use and ODC rates. This null result was not likely due to imprecise measurement: the parameter estimate was nearly zero (-0.004) and fairly precise, allowing us to reject an improvement of 2.4 percentage points or more at the 95% confidence level. These findings suggest that, although EMRs and ODC are correlated, having an EMR was not a significant factor leading to higher ODC during the period covered by our data set. Instead, the findings imply that clinics adopting EMRs performed better on MNCM metrics before EMR adoption.

There are a number of potential explanations for our finding that EMR adoption did not improve ODC rates. First, ODC rates combine performance on 5 different measures, and the effect of EMRs could vary across measures. To address this possibility, we estimated separate models for each component measure. We found that EMR adoption did not improve any individual measure.
Second, EMR systems may change the way medical care is documented independent of their effect on actual care processes. Documentation changes could obscure the effect of EMRs on quality. We found no relationship between EMR adoption and missing data rates. (Results from the component quality and missing data analyses are available on request from the authors.)
Third, the benefits of EMR adoption may not be realized immediately. Although we observed only 3 years of data, we tested for a 1-year lagged effect of EMRs on ODC, controlling for clinic characteristics and allowing for unobserved clinic and time effects (model 4, Table 3). We found no relationship between lagged EMR adoption and ODC. Naturally, a longer time series is needed to better test alternative lag structures.
Although the average EMR effect may be limited in the short run, benefits may be higher for clinics with greater technological sophistication or implementation. We examined whether EMR value was higher in larger practices and particularly in those with greater technology penetration across clinics. We tested for these relationships by estimating models where EMR was interacted with practice characteristics (Table 3, models 5 and 6). In model 5, we allowed the effect of EMRs to differ for small and large practices, with large practices defined as those with 10 or more clinics. Adoption of EMRs led to a 2 percentage point reduction in ODC for clinics in small practices, whereas EMR adoption was associated with a 0.5 percentage point increase in ODC for clinics in larger practices. Although the estimated impact was relatively small, these findings indicate that EMRs are more effective in improving ODC for clinics in large practices (ie, increases in benefits are scaled to practice size).
Electronic medical records may be more effective when implemented within practices with more sophisticated technology infrastructure and greater IT penetration. We estimated models that allowed the effect of EMR to depend on EMR utilization at other clinics within the same practice. Information technology penetration effects were identified by comparing changes in EMR value when other clinics in the same practice adopted EMRs. In nearly all specifications, EMRs continued to have no effect on ODC. As an example, model 6 shows that the main effect of EMRs on quality was nearly zero (-0.024); however, EMRs did have a positive effect (0.032) when interacted with the proportion of affiliated clinics using EMRs. However, these effects were small and not statistically significant. Models 4 through 6 suggest that the potential short-run EMR effect is modest in a wide range of clinic contexts.
Discussion
Limitations
There are a number of limitations to our analysis. In particular, we observed the impact of EMR adoption over only 3 years. Therefore, we were able to observe only the near-term effects of EMR adoption. Furthermore, the MNCM data represent a self-selected sample of clinics that engage in public reporting and have higher EMR adoption rates than the national average, suggesting that caution is needed in generalizing to a broader population.
A further limitation is that our data did not capture the ways in which clinics and individual physicians use EMRs. Unobserved EMR sophistication and utilization are correlated with our EMR measures. If these unobserved features improve quality, then our analyses would have overstated the effect of transitioning from paper to EMRs. Given that our estimated parameters were near zero, this bias could not have driven our conclusions. EMR adoption may, however, be more valuable in some contexts. Clinics that use EMRs in different ways to support care may experience different effects of EMRs on quality. We addressed this issue in a limited way by examining partial and full EMR utilization, as well as lagged EMR utilization. In fact, our results suggest that modest benefits of EMRs may exist for clinics in large practices. Our general findings reflect the impact of “average” EMR use, not the potential gains from sophisticated, fully implemented EMRs. This approach is appropriate for assessing the potential impact of EMRs on quality in most clinics in the early years of adoption.
Finally, our study was limited by the focus on specific quality indicators related to 1 disease and reported by clinics in 1 geographic area. Findings for other diseases in other practice settings or using other quality indicators could differ. In particular, the quality indicators reported by MNCM could be categorized as intermediate outcome measures as opposed to process measures. EMRs could improve processes of care, but this improvement may not translate into improved intermediate outcomes. Nevertheless, improving patient outcomes arguably is an appropriate goal for EMR utilization.
Implications
We studied the relationship between ambulatory EMR adoption and patients’ diabetes outcomes using publicly reported clinic quality data. On average, clinics with EMRs achieved better diabetes outcomes than clinics using paper-based records: clinics with EMRs achieved ODC for 25% of their patients, whereas other clinics achieved ODC for 16% of patients. However, these results do not appear to be causal. The relationship between EMRs and quality was entirely eliminated once we controlled for preadoption quality using panel data techniques. In effect, clinics that adopted EMRs achieved high levels of ODC before EMR adoption. We did, however, find evidence that EMR adoption leads to modest quality improvements for clinics in large, multisite practices.
Recent studies support our empirical results. In particular, Cebul et al9 studied the effect of EMR use on diabetes quality using similar data. They found that clinics with EMRs achieved 4 of 5 desired outcomes for 43% of patients, as opposed to 16% for clinics with paper-based records. The EMR differential fell to 15 percentage points when they controlled for observable clinic characteristics. Their data and methods were similar to ours, and their findings are analogous to our cross-sectional results. Cebul et al also recognized that these cross-sectional results “may be subject to selection bias.” Their findings provide evidence that EMR adoption is correlated with observable determinants of quality and validate our basic model. Herrin et al15 also found that EMR adoption improved diabetes care outcomes, although the effects largely occurred with patient-dependent rather than physician-dependent measures. This may be a consequence of EMRs affecting the way outcomes are measured. Our findings build on this result and provide strong evidence that EMR adoption is correlated with other, unobserved quality determinants. These results suggest caution in interpreting associations between EMRs and quality in cross-sectional studies; they are, however, consistent with findings on health IT in hospitals using panel data techniques.16-20
The HITECH Act provides incentives for the widespread adoption of EMR systems. Our results suggest that the short-run benefits from ambulatory EMR adoption, as measured by patient intermediate outcomes, may be modest at best. These results are particularly notable as the clinics tracked by MNCM were relatively early adopters and may be even more sophisticated in their ability to use EMRs than the average clinic. Pay-for-performance programs with EMR adoption incentives may face similar expectations. EMRs may require a substantial period of time postadoption before contributing to improvement in publicly reported quality measures. Furthermore, if practices that are early adopters of EMRs are different in significant, but largely unobservable, ways from nonadopters or from later adopters, cross-sectional data related to their experience may overstate the potential contribution of EMRs to raising the level of quality overall. It will be important to revisit these findings as more practices adopt EMRs, as EMRs become more sophisticated, and as data from a longer time period become available.
Author Affiliations: From University of Minnesota (JSM, JC, BL), Minneapolis, MN.
Funding Source: This research was supported by a grant from the Robert Wood Johnson Foundation for the evaluation of its Aligning Forces for Quality Initiative.
Author Disclosures: The authors (JSM, JC, BL) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JSM, JC); acquisition of data (JSM, JC); analysis and interpretation of data (JSM, JC, BL); drafting of the manuscript (JSM, JC, BL); critical revision of the manuscript for important intellectual content (JC, BL); statistical analysis (JSM, JC, BL); obtaining funding (JC); and supervision (JSM, JC).
Address correspondence to: Jeffrey S. McCullough, PhD, University of Minnesota, 420 Delaware Street SE, MMC 729, Minneapolis, MN 55455. Email: mccu0056@umn.edu.
1. Buntin MB, Jain SH, Blumenthal D. Health information technology: laying the infrastructure for national health reform. Health Aff (Millwood). 2010;29(6):1214-1219.
2. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood). 2011;30(3): 464-471.
3. Robert Wood Johnson Foundation. Reform in Action: Does Use of EHRs Help Improve Quality? Insights from Cleveland. http://www.rwjf.org/content/dam/web-assets/2011/06/does-use-of-ehrs-help-improve-quality-. Published June 9, 2011. Accessed August 1, 2012.
4. Finkelstein J, Knight A, Marinopoulos S, et al. Enabling Patient-Centered Care Through Health Information Technology. Evidence Report No. 206. (Prepared by Johns Hopkins University Evidence-based Practice Center under Contract No. 290-2007-10061-I.) AHRQ Publication No. 12-E005-EF. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?productid=1158&pageaction=displayproduct. Published June 2012. Accessed August 1, 2012.
5. Keyhani S, Herbert PL, Ross JS, Federman A, Zhu CW, Siu AL. Electronic health record components and the quality of care. Med Care. 2008;46(12):1267-1272.
6. Black AD, Car J, Pagliari C, et al. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med. 2011;8(1): e1000387.
7. Saver B. The EHR has no clothes. Health Affairs Blog. http://healthaffairs.org/blog/2012/06/20/the-ehr-has-no-clothes/. Published June 20, 2012. Accessed August 1, 2012.
8. Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Arch Intern Med. 2011;171(10):897-903.
9. Cebul RD, Love TE, Jain AK, Hebert CJ. Electronic health records and quality of diabetes care. N Engl J Med. 2011;365(9):825-833.
10. Minnesota Community Measurement. Minnesota HealthScoresSM. http://www.mnhealthcare.org/. Published 2009. Accessed February 10, 2009.
11. Minnesota Community Measurement. Breaking New Ground: 2010 Health Care Quality Report. http://mncm.org/site/upload/files/HCQRFinal2010.pdf. Published December 2010. Accessed November 30, 2010.
12. Ford EW, Menachemi N, Peterson LT, Huerta TR. Resistance is futile: but it is slowing the pace of EHR adoption nonetheless. J Am Med Inform Assoc. 2009;16(3):274-281.
13. Hing E, Hsiao C-J. Electronic medical record use by office-based physicians and their practices: United States, 2007. Natl Health Stat Report. 2010;(23):1-11.
14. Bertrand M, Duflo E, Mullainathan S. How much should we trust difference-in-differences estimates? Q J Econ. 2004;119(1):249-275.
15. Herrin J, da Graca B, Nicewander D, et al. The effectiveness of implementing an electronic health record on diabetes care outcomes. Health Serv Res. 2012;47(4):1522-1540.
16. McCullough JS, Casey M, Moscovice I, Prasad S. The effect of health information technology on quality in U.S. hospitals. Health Aff (Millwood). 2010;29(4):647-654.
17. Jones SS, Adams JL, Schneider EC, Ringel JS, McGlynn EA. Electronic health record adoption and quality improvement in US hospitals. Am J Manag Care. 2010;16(12)(suppl HIT):SP64-SP71.
18. Agha L. The Effects of Health Information Technology on Costs and Quality of Medical Care. MIT Job Market Paper. http://economics.mit.edu/files/6216. Published November 11, 2011. Accessed January 17, 2012.
19. McCullough JS, Parente ST, Town RJ. Health Information Technology and Patient Outcomes: The Role of Organizational and Informational Complementarities. 2013. NBER Working Paper 18684.
20. Miller AR, Tucker C. Can health care information technology save babies? J Polit Econ. 2011;119(2):289-324.