
Alternative Measures of Electronic Health Record Adoption Among Hospitals

This study analyzes pathways toward hospital adoption of electronic health records and explores relationships among electronic health record function variables.

Objective:

To develop measures of the use of electronic health records (EHRs) that accurately reflect the full continuum of hospital adoption and progress toward meaningful use and to understand the intercorrelations and patterns associated with hospital adoption of specific EHR functions.

Study Design:

This study analyzed the 2009 American Hospital Association (AHA) information technology (IT) supplement survey. The main section of this survey assessed the adoption and use of 24 EHR functionalities in the following major categories: electronic clinical documentation, results viewing, computerized provider order entry, and clinical decision support.

Methods:

This study relied on descriptive statistical methods and a principal component factor analysis.

Results:

We found that 11.4% of hospitals met all and 48.3% met half or more of the core criteria that are included in both the AHA IT survey and the final “meaningful-use” rule. The results from our factor analysis imply that hospitals adopt groups of similar EHR functions, but choices to adopt across major categories are relatively independent.

Conclusions:

Many hospitals have adopted multiple features of EHRs and tend to use a staged adoption strategy based on logical groupings of functions. These results indicate to policymakers that there is no single path toward adoption of advanced EHR systems.

(Am J Manag Care. 2010;16(12 Spec No.):e293-e301)

This study provides insight into how to accurately measure, track, and understand hospital adoption of electronic health records (EHRs).

  • This is the first study to use a factor analysis to measure the intercorrelations of EHR functions.

  • This study expands the literature on measuring EHR adoption by analyzing continuous measures of comprehensive systems and progress toward meaningful use.

  • Policymakers can use this information to measure the effectiveness of incentive payments and government programs and to help develop legislation for stage II and stage III meaningful use.

The Health Information Technology for Economic and Clinical Health Act (HITECH Act), part of the American Recovery and Reinvestment Act of 2009, was designed to improve the quality and efficiency of the healthcare system through the adoption and use of electronic health records.1 The stimulus funds allocated in the act will allow the Centers for Medicare & Medicaid Services (CMS) to award incentive payments to providers who achieve “meaningful use” of certified electronic health record (EHR) systems. The recently released definition of meaningful use created a core set of 15 objectives and a menu of 10 objectives from which providers can choose 5 to implement.2 These criteria were selected to ensure that hospitals and clinical professionals use health information technology (IT) in ways that will improve patient care. Stimulus funds also were allocated to a set of programs designed to establish the nationwide infrastructure needed to exchange health information and to assist smaller practices and critical access hospitals in adopting and using EHRs.

Policymakers, researchers, providers, and the health IT industry alike are now even more interested in accurately tracking hospital adoption and use of EHRs. From a policy perspective, it is important to measure changes in EHR adoption that occur in response to CMS’s incentive payments and the outreach programs administered by the Office of the National Coordinator for Health Information Technology (ONC). Similarly, both government and industry are interested in tracking the adoption and use of EHRs to determine which hospitals are lagging behind. This information will enable ONC to track progress toward HITECH’s goals, provide data to outreach programs, and target limited resources toward high-value areas (eg, hospitals where it is cost-effective to install or upgrade an EHR system). These data also can help vendors and health IT consultants target areas where there is potential demand for new EHR systems or upgrades to existing systems. Finally, tracking adoption of specific EHR functions and determining which functions are the most difficult or the easiest to achieve can inform the subsequent versions of the criteria for meaningful use. The stage II meaningful-use criteria will be finalized in 2012 and will focus on disease management, clinical decision support, medication management, support for patient access to their health information, transitions in care, quality measurement and research, and bidirectional communication with public health agencies.3 The stage III criteria will be finalized in 2014 and will focus on achieving improvements in quality, safety, and efficiency.3

Currently, ONC and the research community rely on data from the American Hospital Association (AHA) IT supplement survey to define 2 levels of adoption. Hospitals must have 24 EHR functions present in all units to be defined as having a comprehensive EHR system and each of 10 functions present in at least 1 unit to be defined as having a basic EHR system.4 These definitions were created by an expert panel, and the results from the 2008 AHA IT supplement survey were published in a widely cited article by Jha et al.4 Their results showed that 1.5% of acute-care nonfederal hospitals had a comprehensive EHR system and 7.6% had a basic EHR system in 2007. Estimates from other data sources, such as the Healthcare Information and Management Systems Society, vary depending on sampling strategy, sample size, and functionalities used to define an EHR system.5-8 For example, Furukawa et al found that hospitals had adopted only 2.2 of 8 EHR applications, with wide variation in adoption across technologies, hospital characteristics, and geographic locations.5 However, given the volume of hospital EHR functions and the potential complexities associated with survey instruments, all of these empirical measures are challenged to accurately reflect current levels of adoption.

This study examines a set of measures that characterize hospitals along a continuum of adoption ranging from no system to a basic system to a system that is either comprehensive or meets a set of criteria that were included in the definition of meaningful use. We used the 2009 AHA IT supplement survey to develop continuous scales of EHR adoption and measure hospital progress toward meaningful use. Overall, we found that 9.8% of all hospitals have fully implemented 20 or more of the functions included in the definition of a comprehensive EHR system. In addition, 11.4% of hospitals met all and 48.3% met half or more of the core meaningful-use criteria that are available on the AHA IT supplement survey. This result differs from that in a recent study by Jha et al,9 which found that only 2% of hospitals have EHRs that meet all of the meaningful-use criteria on the AHA survey. The results vary due to different interpretations of the final meaningful-use rule, which is further discussed in the Methods section. Our estimates provide a more optimistic picture of the EHR adoption landscape (relative to dichotomous measures of adoption) and can be used by policymakers to help forecast the number of hospitals that could be early meaningful users of EHRs.

To further understand the underlying structure of the AHA data, we also used a factor analysis to explore the intercorrelations among the EHR function variables on the AHA survey. This analytic strategy examined the extent to which hospitals implementing 1 function might also have implemented other specific functions. The results from the factor analysis imply that adoption of specific functions within major EHR categories (electronic clinical documentation, results viewing, computerized provider order entry (CPOE), and clinical decision support) is highly correlated, but adoption across major categories is relatively independent.

DATA

The AHA annual survey samples more than 6500 hospitals throughout the United States and is primarily completed online.10 ONC started funding the annual IT supplements to the core AHA survey in 2008. Like the core AHA survey, the IT supplement is sent to a hospital’s chief executive officer with instructions to assign the survey to the most knowledgeable person in the institution for completion (eg, the chief information officer or equivalent).4 The sample for this analysis was limited to the hospitals that responded to the AHA IT supplement survey (n = 3937 hospitals). Comparisons of the respondents and nonrespondents found only modest differences between the groups.9

The first question on the 2009 IT supplement assessed the adoption and use of 24 EHR functionalities in the following major categories: electronic clinical documentation, results viewing, CPOE, and clinical decision support. Each hospital also indicates whether these functions are fully implemented in every unit of the hospital, fully implemented in at least 1 unit of the hospital, or not yet fully implemented in any unit of the hospital (with options ranging from not considering implementation to beginning to implement). Appendix A shows the full layout of this question.

METHODS

We created 2 continuous measures of hospital adoption. The first measure captures the number of fully implemented functions (across all units) that are part of the definition of a comprehensive system, as displayed in Appendix A. This variable ranged from 0 to 24, where hospitals with a score of 24 had a comprehensive EHR system based on the dichotomous definition of adoption and those with a score of 0 had no functions fully implemented. The second measure also ranged from 0 to 24, but each function was counted as implemented if it was in place in at least 1 unit or across all units. The latter is a looser definition of adoption, but it incorporates hospitals that have started implementing EHRs in some, but not necessarily all, clinical units.
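As an illustration only, the following Python sketch shows how these 2 continuous measures could be computed. The DataFrame layout, the column names, and the assumption that responses are coded 1 through 6 (with 1 = fully implemented across all units and 2 = fully implemented in at least 1 unit) are hypothetical conveniences for the example, not the actual AHA variable definitions.

```python
# Minimal sketch, assuming hypothetical column names and a hypothetical
# 1-6 response coding (1 = fully implemented across all units,
# 2 = fully implemented in at least 1 unit, 3-6 = not yet implemented).
import pandas as pd

FUNCTION_COLS = [f"ehr_func_{i}" for i in range(1, 25)]  # 24 functionality items


def adoption_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Return both 0-24 adoption counts for each hospital (row)."""
    funcs = df[FUNCTION_COLS]
    scores = pd.DataFrame(index=df.index)
    # Measure 1: functions fully implemented across all units.
    scores["n_all_units"] = (funcs == 1).sum(axis=1)
    # Measure 2: functions fully implemented in at least 1 unit.
    scores["n_any_unit"] = (funcs <= 2).sum(axis=1)
    return scores
```

Shares such as those reported in Table 1 then follow directly; for example, (adoption_scores(df)["n_all_units"] >= 20).mean() gives the proportion of hospitals with 20 or more functions fully implemented across all units.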

We also constructed a continuous variable that measures hospitals’ progress toward meaningful use. The AHA IT supplement survey contains 8 of the final core meaningful-use criteria: patient demographics, patient problem lists, patient medication lists, discharge summaries, CPOE for medications, drug-allergy alerts, drug-drug interaction alerts, and any 1 clinical decision support rule (in addition to drug-allergy and drug-drug interaction alerts). To be conservative, we assumed that hospitals must implement these functions across all units to meet the meaningful-use standards. The single exception is CPOE for medications, where the final rule states that “30% of patients with at least one medication in their medication list have at least one medication ordered through CPOE.”2 As such, we assumed that hospitals only need to implement CPOE in at least 1 unit to meet this meaningful-use standard. We did not include any of the menu set functions that are included in the AHA survey. Overall, our interpretation of these functions is consistent with the criteria used by Jha et al.9 However, their interpretation of drug-drug interaction and drug-allergy checks is more restrictive: in the Jha et al article, hospitals must have CPOE for medications implemented in all clinical units in order to satisfy the drug-drug interaction and drug-allergy checks criteria for meaningful use. We defined implementation of these functions independently.
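Under the same hypothetical coding, the meaningful-use proxy described above can be sketched as a simple 0-to-8 count in which CPOE for medications is credited at the at-least-1-unit threshold and the other 7 criteria require implementation across all units. The column names below are illustrative, not the actual AHA survey labels.

```python
import pandas as pd

# Hypothetical names for the 7 criteria that must be fully implemented
# across all units (coded 1) under our conservative assumption.
MU_ALL_UNIT_COLS = [
    "patient_demographics", "problem_lists", "medication_lists",
    "discharge_summaries", "drug_allergy_alerts", "drug_drug_alerts",
    "cds_rule",
]


def meaningful_use_score(df: pd.DataFrame) -> pd.Series:
    """Return the 0-8 proxy count of core meaningful-use criteria met."""
    score = (df[MU_ALL_UNIT_COLS] == 1).sum(axis=1)
    # CPOE for medications only needs to be implemented in at least 1 unit.
    score += (df["cpoe_medications"] <= 2).astype(int)
    return score
```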

Although the final meaningful-use rule states that hospitals must perform “at least one test of an EHR’s capacity to electronically exchange information,”2 we did not include this criterion in our measure. The AHA IT supplement survey asks if hospitals actively exchange patient data with other providers. However, because actual exchange is not part of the final rule, we did not include the exchange variable in our proxy measure for meaningful use. Similarly, the final rule states that hospitals must report numerator and denominator clinical quality measures through attestation to CMS or states. The AHA IT supplement survey contains a question related to quality measures (“Does your hospital’s electronic system allow you to automatically generate Hospital Quality Alliance measures by extracting data from an electronic record for a Medicare inpatient prospective payment system update?”), but we did not include it in our proxy measure of meaningful use because it does not specify numerator and denominator attestation. These functions are optional choices, and we do not know if hospitals will actually choose to implement them. In contrast, Jha et al included both exchange and hospital reporting in their criteria for meaningful use.9

We also analyzed how adoption varied across major categories (electronic clinical documentation, results viewing, CPOE, and clinical decision support) and used principal component factor analysis to explore the underlying structure of the EHR functionality variables. The purpose of this principal component factor analysis is to understand the correlations among the AHA functionality variables and to create a smaller number of unobservable principal components. Principal component factor analysis computes linear transformations that map the data from a high-dimensional space (eg, 24 functionality variables) to a lower-dimensional space (eg, 4 factor variables), while preserving as much information as possible by minimizing the mean square error. To facilitate interpretation of the factors, we used an orthogonal varimax rotation method. We retained only factors with eigenvalues greater than 1, which is consistent with most of the literature related to this methodology.
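For readers who want to reproduce the general approach, the sketch below extracts principal component factors from the correlation matrix of the 24 functionality variables, keeps those with eigenvalues greater than 1, and applies an orthogonal varimax rotation. It uses only numpy and makes no claim about the authors' actual code or software; the input X is assumed to be a hospitals-by-functions matrix of the indicator (or 1-6 "raw") variables.

```python
import numpy as np


def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of a factor loading matrix."""
    n_items, n_factors = loadings.shape
    rotation = np.eye(n_factors)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion (gamma = 1).
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n_items
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1 + tol):
            break
        criterion = new_criterion
    return loadings @ rotation


def pc_factor_loadings(X: np.ndarray) -> np.ndarray:
    """Principal component factor loadings, retaining factors with eigenvalues > 1."""
    corr = np.corrcoef(X, rowvar=False)          # 24 x 24 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)      # returned in ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                         # eigenvalue-greater-than-1 rule used in the study
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return varimax(loadings)
```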

RESULTS

Table 1 shows that 3.6% of hospitals have all 24 EHR functions listed on the AHA survey fully implemented across all units (2.7% when the sample is limited to acute-care nonfederal hospitals as defined by Jha et al9). However, the table also shows that 9.8% of hospitals have at least 20 functions, and 36.5% have at least half of the functions fully implemented across all units. These levels of hospital adoption are not fully captured by the dichotomous definition that is widely used in the current literature. Table 1 also shows that 14.9% of hospitals have 0 functions implemented across all units and 46.5% have 7 or fewer functions implemented across all units.

The percentage of adopters increases markedly when implementation is defined as occurring in at least 1 unit as opposed to all units. Table 1 shows that 6.6% of hospitals have all 24 functions and 54.7% have at least half of the functions fully implemented in at least 1 unit. Only 8.1% of hospitals have not implemented any functions in any unit.

Table 2 illustrates hospital implementation of specific functions that are part of the core meaningful-use criteria. Approximately 75% of hospitals met the criterion for patient demographics, compared with roughly 30% that met the CPOE for medications criterion and 30% that met the problem lists criterion. Approximately 40% to 45% of hospitals met each of the remaining criteria that are part of the final meaningful-use rule.

Table 3 shows that 11.4% of hospitals implemented all 8 meaningful-use functions included on the survey, and nearly half of hospitals implemented 4 or more. Among hospitals that implemented 4 or more functions, the vast majority (> 80%) implemented patient demographics, medication lists, drug-allergy alerts, drug-drug interaction alerts, and clinical decision support rules. In contrast, implementation of patient problem lists (58%), discharge summaries (76%), and CPOE for medications (50%) lagged behind implementation of the other functions. Finally, approximately 19% of hospitals met none of the meaningful-use criteria that are part of the AHA IT supplement survey list.

Table 4 shows the results from the principal component factor analysis. The factor analysis used 24 dichotomous variables indicating whether or not the hospital fully implemented the functionality across all units. The eigenvalues are above 1 for the first 4 factors and drop below 1 for additional factors, implying that a 4-factor solution provides the best representation of the data. Overall, these 4 factors account for more than 70% of the variation in the data, and the first factor alone accounts for approximately 20% (data not shown in tables). However, given the potential biases associated with conducting factor analysis with dichotomous data,11 we also ran a similar model using the 24 “raw” function variables with values ranging from 1 to 6 (results shown in Appendix B). These results are consistent with the results in Table 4.

The factor loadings in Table 4 and Appendix B represent the correlations between each factor and each individual variable on the AHA survey. These results imply that the 4 main factors are primarily defined by the main categories of adoption (electronic clinical documentation, results viewing, CPOE, and clinical decision support) and that there are strong correlations within each category of adoption, but not across categories of adoption. For example, in both models, the correlations between each of the CPOE variables and factor 1 are greater than 0.8. This result is very intuitive: we would expect hospitals that implement CPOE for lab tests (across all units) also to implement CPOE for radiology tests. However, we found a lack of correlation across functionality types. For each model, the correlations between the non-CPOE variables and factor 1 ranged from 0.1 to 0.46. These results are reinforced by the cross-tabulations in Table 5, which shows that among hospitals that fully implemented all of the CPOE functions, only 45.6% implemented all of the clinical documentation functions, 61.3% implemented the results viewing functions, and 50.4% implemented the clinical decision support functions. Similar trends exist for the other categories of adoption.
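A Table 5-style cross-tabulation can be sketched as follows: define a hospital as having completed a major category when every function in that category is fully implemented across all units, then compute, among hospitals that completed one category, the share that also completed each of the others. The category-to-column mapping below is hypothetical and only illustrates the structure of the calculation.

```python
import pandas as pd

# Hypothetical grouping of survey columns into the 4 major categories.
CATEGORY_COLS = {
    "clinical_documentation": ["doc_nursing_notes", "doc_problem_lists", "doc_medication_lists"],
    "results_viewing": ["view_lab_results", "view_radiology_results", "view_radiology_images"],
    "cpoe": ["cpoe_medications", "cpoe_lab_tests", "cpoe_radiology_tests"],
    "clinical_decision_support": ["cds_guidelines", "cds_drug_allergy", "cds_drug_drug"],
}


def conditional_adoption(df: pd.DataFrame) -> pd.DataFrame:
    """Rows: conditioning category; columns: share of those hospitals also completing each category."""
    complete = pd.DataFrame(
        {cat: (df[cols] == 1).all(axis=1) for cat, cols in CATEGORY_COLS.items()}
    )
    shares = {}
    for cat in complete.columns:
        # Among hospitals that completed `cat`, the mean of each Boolean
        # column is the share that also completed that other category.
        shares[cat] = complete.loc[complete[cat]].mean()
    return pd.DataFrame(shares).T
```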

DISCUSSION

This study demonstrates that EHR adoption is a complex process. Fewer than 5% of hospitals had a comprehensive EHR system based on the criterion of having 24 separate functions fully implemented across all units. However, nearly 10% of hospitals had 20 of these functions implemented across all units. Similarly, 12% of hospitals implemented all 8 functions and 40% fully implemented at least 5 functions that are part of the final meaningful-use criteria. Even though only half of the core final meaningful-use criteria are represented on the AHA survey, these 8 functions represent 3 of the 4 major categories of adoption and provide some insight into progress toward achieving meaningful use. The data also highlight which functions (eg, patient problem lists, CPOE for medications) hospitals have struggled to implement so far relative to the other meaningful-use criteria.

The second major result is that adoption of specific functions within major EHR categories is highly correlated, but adoption across categories is largely independent. The results from the factor analysis can be partially attributed to the survey layout, which clusters functionalities under specific major categories. However, the results also suggest that hospitals use a staged adoption strategy based on logical functional clusters. This finding, combined with the findings described above about the proportions of hospitals meeting the meaningful-use criteria on the survey, indicates that many hospitals may indeed have already adopted sets of functions on the meaningful-use list beyond the 8 we observed.

Policymakers should be aware that there is no single path to adoption and that hospitals are advancing their EHR systems in nonuniform ways. Future legislation should take into account that some hospitals might be advanced in 1 area of adoption (eg, results viewing) but lag behind in other areas (eg, CPOE). For instance, the results in Table 5 show that 28.6% of hospitals have fully implemented all of the results viewing functions on the AHA IT supplement survey. However, among those 28.6% of hospitals that have fully implemented viewing systems, only 27% have fully implemented electronic clinical documentation systems, 28% have fully implemented CPOE systems, and 30% have fully implemented clinical decision support systems.

The stage I meaningful-use criteria reflect these differences through the use of the core and menu criteria sets. The core and menu meaningful-use objectives allow hospitals to continue to make independent decisions surrounding their IT systems, while providing incentives and support programs that increase the speed and ease with which they make them. Using the methods described in this study, we will be able to assess hospitals’ progress toward meaningful use and the rates of adoption of key functionalities in the core set, menu set, and beyond as they evolve.

Author Affiliations: From the Office of the National Coordinator (FEB, MJBB, CPF), US Department of Health and Human Services, Washington, DC.

Funding Source: The authors report no external funding for this work.

Author Disclosures: The authors (FEB, MJBB, CPF) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (FEB, MJBB, CPF); acquisition of data (FEB); analysis and interpretation of data (FEB, MJBB, CPF); drafting of the manuscript (FEB, MJBB, CPF); critical revision of the manuscript for important intellectual content (FEB, MJBB, CPF); statistical analysis (FEB, MJBB); administrative, technical, or logistic support (FEB, MJBB, CPF); and supervision (MJBB).

Address correspondence to: Fredric E. Blavin, MS, Office of the National Coordinator, US Department of Health and Human Services, 200 Independence Ave SW, Washington, DC 20201. E-mail: fblavin@gmail.com.

1. Blumenthal D. Launching HITECH. N Engl J Med. 2010;362(5):382-385.

2. Blumenthal D, Tavenner M. The "meaningful use" regulation for electronic health records. N Engl J Med. 2010;363(6):501-504.

3. Centers for Medicare & Medicaid Services. Details for: CMS proposes definition of meaningful use of certified electronic health records (EHR) technology. Fact sheets. December 30, 2009. http://www.cms.gov/apps/media/press/factsheet.asp?Counter=3564. Accessed August 2, 2010.

4. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638.

5. Furukawa MF, Raghu TS, Spaulding TJ, Vinze A. Adoption of health information technology for medication safety in U.S. hospitals, 2006. Health Aff (Millwood). 2008;27(3):865-875.

6. Cutler DM, Feldman NE, Horwitz JR. U.S. adoption of computerized physician order entry systems. Health Aff (Millwood). 2005;24(6):1654-1663.

7. Healthcare Information and Management Systems Society (HIMSS). 2002 Hot Topic Survey. Chicago, IL: HIMSS Analytics; 2002.

8. Jha AK, Doolan D, Grandt D, Scott T, Bates DW. The use of health information technology in seven nations. Int J Med Inform. 2008;77(12):848-854.

9. Jha AK, DesRoches CM, Kralovec P, Joshi M. A progress report on electronic health records in U.S. hospitals. Health Aff (Millwood). 2010;29(10):1-7.

10. American Hospital Association. Survey history & methodology. http://www.ahadata.com/ahadata/html/historymethodology.html. Accessed August 2, 2010.

11. Shapiro SE, Lasarev MR, McCauley L. Factor analysis of Gulf War illness: what does it add to our understanding of possible health effects of deployment? Am J Epidemiol. 2002;156(6):578-585.
