The American Journal of Managed Care

February 2009, Volume 15, Issue 2

Reporting Hospitals" Antibiotic Timing in Pneumonia: Adverse Consequences for Patients?

Nationwide data on hospital emergency department visits reveal little evidence of unintended adverse consequences associated with publicly reporting hospitals’ antibiotic timing in pneumonia.

Objective:

To determine whether publicly reporting hospital scores on antibiotic timing in pneumonia (percentage of patients with pneumonia receiving antibiotics within 4 hours) has led to unintended adverse consequences for patients.

Study Design:

Retrospective analyses of 13,042 emergency department (ED) visits by adult patients with respiratory symptoms in the National Hospital Ambulatory Medical Care Survey, 2001-2005.

Methods:

Rates of pneumonia diagnosis, antibiotic use, and waiting times to see a physician were compared before and after public reporting, using a nationally representative hospital sample. These outcomes also were compared between hospitals with different antibiotic timing scores.

Results:

There were no differences in rates of pneumonia diagnosis (10% vs 11% of all ED visits, P = .72) or antibiotic administration (34% vs 35%, P = .21) before and after antibiotic timing score reporting. Mean waiting times to be seen by a physician increased similarly for patients with and without respiratory symptoms (11-minute vs 6-minute increase, respectively; P = .29). After adjustment for confounders, hospitals with higher 2005 antibiotic timing scores had shorter mean waiting times for all patients, but there were no significant score-related trends for rates of pneumonia diagnosis or antibiotic use.

Conclusion:

Despite concerns, public reporting of hospital antibiotic timing scores has not led to increased pneumonia diagnosis, antibiotic use, or a change in patient prioritization. (Am J Manag Care. 2009;15(2):137-144)

Take-Away Points

There has been concern that publicly reporting hospital scores on antibiotic timing in pneumonia (percentage of patients with pneumonia who received antibiotics within 4 hours) has led to unintended adverse consequences for patients.

  • A national sample of hospitals revealed little evidence that this public reporting has led to widespread overdiagnosis of pneumonia or inappropriate antibiotic administration.
  • Explainable variation in hospitals’ antibiotic timing scores is primarily attributable to differences in patients’ waiting times to see a physician, rather than differences in rates of pneumonia diagnosis or antibiotic administration.
  • Future monitoring of the effects of public reporting programs may provide valuable guidance to policy makers, especially in areas of controversy.

To encourage improvement in hospitals’ quality of care, the Hospital Quality Alliance (HQA) began an initiative in 2004 to collect and publicly report hospital-level performance on 10 quality measures.1-3 More than 98% of US acute care hospitals supply performance data to the HQA,4,5 but concerns have been raised about potential unintended consequences of public reporting.6-8

Hospitals’ responses to the HQA measure “Initial Antibiotic Received within 4 Hours of Hospital Arrival” have been of particular concern. Hospitals feeling pressure to improve antibiotic timing performance could potentially “play for the test” by encouraging the premature (and potentially inaccurate) diagnosis of pneumonia, giving antibiotics indiscriminately to patients with respiratory symptoms, or inappropriately prioritizing patients likely to have pneumonia ahead of others whose medical conditions may be more urgent.9-14

Prior studies from single institutions and self-selected hospitals participating in a pay-for-performance pilot program suggest that incentives tied to pneumonia antibiotic timing scores have led to increased rates of inaccurate pneumonia diagnosis and inappropriate antibiotic administration in emergency departments (EDs).15-17 However, whether public reporting on antibiotic timing has had similar effects on a national scale is unknown.

Because antibiotic timing scores are thought to reflect care delivered in EDs,18 we used a national database of ED visits and compared the care for patients with respiratory symptoms before and after the start of public reporting. We assessed whether these patients were more likely to be diagnosed with pneumonia, to be prescribed antibiotics, and to have shorter waiting times to see a physician (compared with patients who did not have respiratory symptoms, reflecting patient prioritization). To test the hypothesis that hospitals with higher scores were “playing for the test,” we also assessed differences on these 3 measures between hospitals scoring higher and lower on the pneumonia antibiotic timing measure.

METHODS AND MATERIALS

Emergency Department Visit Data

We used patient visit data from the Emergency Department module of the nationally representative National Hospital Ambulatory Medical Care Survey (NHAMCS), which is administered annually by the National Center for Health Statistics (NCHS).19 A rotating panel of nonfederal, general, and short-stay hospitals participates in the NHAMCS, and visits from each hospital in the panel are sampled approximately every 15 months.

Trained hospital staff record patient and clinical data on standardized forms for each visit. Patient data include demographic information, expected source of payment, and nursing home residence. Clinical data include up to 3 reasons for visit, up to 3 physician diagnoses (in the ED), up to 8 medications administered during the ED visit (but not the timing of medication administration), waiting time to see a physician (2003-2005 only), triage vital signs, and orientation to person, place, and time.20

The NHAMCS Emergency Department module collected 182,332 patient visit records between 2001 and 2005. Among contacted hospitals over the study period, 90% to 95% participated. Visits are weighted to allow extrapolation of survey results to national estimates.19 For our analysis the NCHS created anonymous hospital identifiers that allowed longitudinal tracking of each participating hospital. The NCHS institutional review board approved all NHAMCS protocols, and the confidentiality of the data is protected by law.21

Hospital Antibiotic Timing Scores and Hospital Characteristics

We obtained publicly available hospital-level HQA performance data on the timing of initial antibiotics delivered to patients admitted with pneumonia for 2004 and 2005. Hospital scores were calculated as the percentage of adult patients discharged from the hospital with a diagnosis of pneumonia who received their first dose of antibiotics within 4 hours of hospital arrival. Detailed specifications for this measure are available elsewhere.22

We also linked hospitals’ pneumonia antibiotic timing scores to other hospital characteristics obtained from the 2005 database of the American Hospital Association: number of beds, geographic region, urban location, ownership (for-profit, not-for-profit, and government), status of membership in the Council of Teaching Hospitals, and percentage of patients covered by Medicare and Medicaid. Of the 507 unique hospitals participating in the NHAMCS Emergency Department module during 2001-2005, NCHS staff matched 503 (99%) to their HQA pneumonia antibiotic timing scores and other corresponding hospital characteristics.

Consistent with prior literature, we included for analysis only hospitals reporting stable antibiotic timing scores, defined as scores calculated using at least 25 patient discharges during the year 2005.2 Of the 503 NHAMCS sample hospitals, 118 (23%) were excluded based on this criterion.
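
For concreteness, the sketch below illustrates how a hospital-level timing score and the 25-discharge stability criterion could be computed. The record layout and names are hypothetical and only approximate the actual HQA measure specification.22

```python
# Hypothetical sketch of the antibiotic timing score calculation and
# the stability criterion; field names and data layout are illustrative,
# not the actual HQA specification.
from dataclasses import dataclass

@dataclass
class PneumoniaDischarge:
    hospital_id: str
    minutes_to_first_antibiotic: float  # measured from hospital arrival

def timing_scores(discharges, min_denominator=25):
    """Return {hospital_id: score}, keeping only stable scores."""
    by_hospital = {}
    for d in discharges:
        by_hospital.setdefault(d.hospital_id, []).append(d)
    scores = {}
    for hospital_id, cases in by_hospital.items():
        if len(cases) < min_denominator:  # <25 discharges: score excluded
            continue
        timely = sum(c.minutes_to_first_antibiotic <= 240 for c in cases)
        scores[hospital_id] = 100.0 * timely / len(cases)  # % within 4 hours
    return scores
```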

Study Population

Our study population included ED visits during 2001-2005 by patients age 18 years and older whose primary reason for visit was “symptoms referable to the respiratory system” or “diseases of the respiratory system,” excluding conditions limited to the upper respiratory tract (eg, nasal congestion). Among included visits, the most common specific reasons for visit were cough (50%), shortness of breath (24%), and “labored or difficult breathing” (11%). In supplementary analyses, inclusion of visits for upper respiratory conditions did not substantively alter our results. Visits were included regardless of patients’ dispositions at the end of each ED visit (eg, admitted to hospital, transferred, discharged to home). Supplementary analyses limited to visits resulting in hospital admission did not substantively alter our results.

Outcome Variables: Processes of Emergency Department Care

We had 3 major outcome variables: ED diagnosis, antibiotic use, and waiting time to see a physician. We used International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to classify ED diagnoses as pneumonia (with the same codes used for antibiotic timing score reporting),22 bronchitis, congestive heart failure, or other. It was possible for a single visit to carry more than 1 of these diagnoses, and we counted visits receiving diagnoses in more than 1 category toward the total in each applicable diagnostic category.

Antibiotic use was identified using the NCHS drug classification system.20,23 As in previous studies, we classified antibiotic use in visits for asthma and congestive heart failure as inappropriate when pneumonia was not also an ED diagnosis.16 In supplementary analyses we also included antibiotic use in bronchitis as inappropriate without substantive changes to the results.
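
The classification rules above can be summarized in a short sketch. The ICD-9-CM code stems shown are abbreviated, illustrative stand-ins for the full measure-specification sets, and the helper names are hypothetical; note that a visit carrying codes in more than 1 category counts toward each applicable total.

```python
# Illustrative sketch of the diagnosis and antibiotic classification
# logic; the ICD-9-CM code stems below are abbreviated examples, not
# the full sets from the measure specifications.
PNEUMONIA_STEMS = {"480", "481", "482", "483", "485", "486"}
ASTHMA_STEMS = {"493"}
CHF_STEMS = {"428"}

def has_any(dx_codes, stems):
    # NHAMCS records up to 3 ED diagnoses per visit; a visit with codes
    # in 2 categories counts toward both category totals.
    return any(code[:3] in stems for code in dx_codes)

def inappropriate_antibiotic(dx_codes, antibiotic_given):
    """Antibiotic use in an asthma or CHF visit is classified as
    inappropriate when pneumonia is not also an ED diagnosis."""
    if not antibiotic_given or has_any(dx_codes, PNEUMONIA_STEMS):
        return False
    return has_any(dx_codes, ASTHMA_STEMS | CHF_STEMS)
```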

Statistical Analysis

We performed 2 main comparisons. First, we analyzed nationwide longitudinal trends and differences in the outcome variables (ED diagnosis, antibiotic use, and waiting time to see a physician) before and after the start of public score reporting among ED visits for respiratory symptoms. We designated January 1, 2004, as the first day of the public reporting period because this was the first day of care that could contribute to publicly available antibiotic timing scores. Because the antibiotic timing measure was first published in October 2003,4 we designated October 1, 2003, as the first day of the reporting period in supplementary analyses with substantively similar results.

Second, we conducted a cross-sectional analysis of relationships between 2005 antibiotic timing scores and the outcome variables, restricting our analysis to ED visits in the public reporting period (2004-2005). We hypothesized that if hospitals were “playing for the test” in order to raise their scores, patients with respiratory symptoms visiting hospitals with the highest 2005 scores would experience the highest rates of pneumonia diagnosis, the highest rates of antibiotic use, and (relative to patients without respiratory symptoms) the shortest waiting times.

We assessed relationships between categorical variables using the χ2 test. Due to the nonnormal distribution of waiting time to see a physician, waiting times were modeled using generalized log-linear regression.
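
As a minimal illustration of these 2 tests, the sketch below uses synthetic data in place of NHAMCS records and ignores the survey weights and clustering that the actual analysis accounts for; a log-transformed ordinary least squares model stands in for the generalized log-linear regression.

```python
# Toy illustration: chi-square test for categorical outcomes across
# reporting periods, and a log-linear model for skewed waiting times.
# Survey weights and clustering are ignored here for simplicity.
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
visits = pd.DataFrame({  # synthetic stand-in for NHAMCS visit records
    "period": rng.choice(["pre", "post"], size=n),
    "pneumonia_dx": rng.integers(0, 2, size=n),
    "wait_minutes": rng.lognormal(mean=3.5, sigma=0.6, size=n),
})

# Chi-square test of pneumonia diagnosis rate by reporting period
table = pd.crosstab(visits["period"], visits["pneumonia_dx"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")

# Waiting times are right-skewed, so model log(wait) rather than wait
wait_model = smf.ols("np.log(wait_minutes) ~ period", data=visits).fit()
print(wait_model.params)
```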

For the longitudinal analysis we constructed multivariable logistic regression models predicting each categorical outcome variable (ED diagnosis and antibiotic use) as a function of time period (prereporting vs during reporting) and the above-listed hospital-level characteristics from the American Hospital Association database. To adjust for case severity and patient characteristics known to be associated with ED processes of care,3,24,25 the models included patient age, sex, orientation to person/place/time (dichotomous), presence of fever (triage temperature >100.4°F), residence in a nursing home (dichotomous), race and ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, and other), anticipated source of payment (private, Medicare, Medicaid, or other), and season of the visit (winter, spring, summer, or fall). We constructed multivariable log-linear regression models for waiting time, including the same potential confounders. For the cross-sectional analysis we created similar models using hospitals’ 2005 antibiotic timing scores (continuous variable) rather than time period as the major predictor.
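
The sketch below shows the general shape of such a model, again on synthetic data, without the survey-design adjustments, and with an abbreviated, illustrative covariate list.

```python
# Toy sketch of a multivariable logistic model predicting an ED
# diagnosis outcome from reporting period plus patient-level
# confounders; covariates are abbreviated and the data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "pneumonia_dx": rng.integers(0, 2, size=n),
    "reporting_period": rng.integers(0, 2, size=n),  # 0 = pre, 1 = during
    "age": rng.integers(18, 95, size=n),
    "female": rng.integers(0, 2, size=n),
    "fever": rng.integers(0, 2, size=n),        # triage temp >100.4°F
    "nursing_home": rng.integers(0, 2, size=n),
    "season": rng.choice(["winter", "spring", "summer", "fall"], size=n),
})

model = smf.logit(
    "pneumonia_dx ~ reporting_period + age + female + fever"
    " + nursing_home + C(season)",
    data=df,
).fit()
print(model.summary())
```

The cross-sectional models described above take the same form, with the hospital’s 2005 antibiotic timing score (as a continuous variable) substituted for the reporting-period indicator.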

In order to assess the contribution of ED care processes to variation in antibiotic timing scores, we constructed supplementary linear regression models predicting the 2005 antibiotic timing score of the hospital associated with each ED visit as a function of pneumonia diagnosis, antibiotic use, and waiting time. We assessed the independent, mutually adjusted contribution of each explanatory variable to overall variation in scores by comparing the partial R2 values for each predictor. All analyses were performed using SUDAAN version 9.0.0 (Research Triangle Institute, Research Triangle Park, NC) to account for data clustering and the complex NHAMCS sampling design.20
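
Conceptually, each predictor’s partial R2 compares the residual variation of the full score-prediction model with that of a reduced model omitting the predictor. A toy sketch on synthetic data, again ignoring the survey design:

```python
# Toy sketch of the partial R2 comparison: refit the score-prediction
# model with each predictor dropped and measure how much of the
# remaining variation that predictor explains. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "timing_score": rng.uniform(40, 100, size=n),  # hospital 2005 score
    "pneumonia_dx": rng.integers(0, 2, size=n),
    "antibiotic_given": rng.integers(0, 2, size=n),
    "wait_minutes": rng.lognormal(mean=3.5, sigma=0.6, size=n),
})

predictors = ["pneumonia_dx", "antibiotic_given", "wait_minutes"]
full = smf.ols("timing_score ~ " + " + ".join(predictors), data=df).fit()

def partial_r2(dropped):
    reduced = smf.ols(
        "timing_score ~ " + " + ".join(p for p in predictors if p != dropped),
        data=df,
    ).fit()
    # Fraction of the reduced model's residual variation explained
    # by adding the dropped predictor back in
    return (reduced.ssr - full.ssr) / reduced.ssr

for p in predictors:
    print(p, round(partial_r2(p), 4))
```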

RESULTS

There were 385 hospitals participating in the NHAMCS from 2001 to 2005 that also reported pneumonia antibiotic timing scores based on at least 25 observations in 2005. There were no significant differences between the NHAMCS hospital sample and the overall population of US hospitals reporting at least 25 antibiotic timing score observations in 2005 (Table 1).

Based on the NHAMCS sample, there were an estimated 40 million (95% confidence interval, 39 to 42 million) ED visits to hospitals by adults with respiratory symptoms between 2001 and 2005. These visits for respiratory symptoms represented 11% of all ED visits, and this percentage was constant across all data years. There were no significant differences between the prereporting (2001-2003) and reporting (2004-2005) periods in patient sociodemographic factors (age, sex, race, and ethnicity), nursing home residence, or expected payment source (data not shown). However, patients in the prereporting period were less likely to have a fever (8% prereporting vs 11% during reporting, P = .003) and to be oriented to person, place, and time (77% prereporting vs 85% during reporting, P < .001).

There was no significant difference in the rate of pneumonia diagnosis among ED visits for respiratory symptoms between the prereporting and reporting periods (Table 2). Antibiotic use during ED visits increased among visits resulting in an ED diagnosis of pneumonia (70% prereporting vs 78% during reporting, P = .01), but there were no significant changes in antibiotic use among patients with other ED diagnoses. Mean waiting times to be seen by a physician increased similarly for patients with and without respiratory symptoms (11-minute vs 6-minute increase, respectively; P = .29). Adjustment for potential confounders including fever and orientation did not significantly alter these relationships.

In our second analysis we examined cross-sectional relationships between hospitals’ 2005 pneumonia antibiotic timing scores and the outcome variables during public reporting. Pneumonia diagnosis rates did not significantly increase with antibiotic timing score (Table 3). (Adjusted diagnosis rates are given in the eAppendix Table, available at www.ajmc.com.) Patients visiting higher-scoring hospitals were more likely to receive an antibiotic (31% in the lowest score quintile vs 44% in the highest, P = .004 for trend) and had shorter mean waiting times regardless of respiratory symptoms (66 vs 38 minutes with respiratory symptoms, P < .001 for trend; 69 vs 38 minutes without respiratory symptoms, P < .001 for trend; P = .45 for trend in difference between symptom categories). After adjustment for confounders, only the relationship between shorter waiting times and higher-scoring hospitals remained a statistically significant trend.

Differences between hospitals in rates of pneumonia diagnosis, antibiotic prescribing, and waiting time explained only 4% of overall variation in 2005 antibiotic timing scores, suggesting a weak relationship between the measured ED processes of care and hospital antibiotic timing scores (data not shown). Of this explainable score variation, waiting times accounted for 79%, and antibiotic usage accounted for 3%.

DISCUSSION AND CONCLUSION

Despite fears that publicly reporting hospitals’ pneumonia antibiotic timing scores would lead to increased pneumonia diagnosis, indiscriminate antibiotic use, and inappropriate prioritization of patients with respiratory symptoms,6,9-11 we found little evidence of these unintended consequences in a nationally representative cohort of ED visits. Rates of pneumonia diagnosis and overall antibiotic use did not exhibit significant changes over time. Waiting times to see a physician increased similarly for patients with and without respiratory symptoms over 2003-2005, arguing against higher prioritization of patients likely to have pneumonia.

Moreover, cross-sectional analyses of ED visits during the public reporting period revealed that after adjustment for confounders, only waiting times differed significantly between hospitals with higher and lower antibiotic timing scores. Successful efforts to shorten waiting times for all patients would be better described as quality improvements than as adverse consequences.

Although hospitals in the highest antibiotic timing score quintile had the highest rates of antibiotic administration for inappropriate diagnoses, the lack of a statistical trend between antibiotic use and timing scores suggests that excessive antibiotic administration does not significantly contribute to the timing score ranking for most hospitals. Also, analysis of overall score variation revealed that only a very small percentage was attributable to differences in antibiotic administration rates. These findings are consistent with the overall stability of antibiotic administration rates before and after the start of public reporting.

However, if there are persistent concerns that hospitals seeking to achieve the highest antibiotic timing scores will, in the future, have increased rates of inappropriate antibiotic administration, then strategies focused on the top-scoring hospitals may be considered. For example, publicly reporting a score “band” rather than an exact score for hospitals scoring above a certain threshold could attenuate the incentive to achieve scores in the range generating these concerns.7,26

The implications of our longitudinal analysis differ from those suggested by some earlier reports. However, key differences in cohort design between our analysis (which included all patients presenting to EDs with respiratory complaints) and earlier single-institution studies (which included only those patients admitted with a pneumonia diagnosis) may not allow direct comparison of findings.15,27 A study among self-selected Premier client hospitals in the Hospital Quality Incentive Demonstration (HQID) revealed higher rates of antibiotic use for inappropriate diagnoses (heart failure, asthma, and chronic obstructive pulmonary disease) at hospitals with higher antibiotic timing scores.16 However, HQID hospitals faced financial incentives directly tied to antibiotic timing performance, and while all hospitals in our analysis publicly reported their antibiotic timing scores, no data were available to identify which of them had antibiotic timing-based financial incentives. It is possible that compared with public reporting, financial incentives could have different effects on patient treatment patterns. Consistent with our findings, prior studies demonstrated associations between ED overcrowding (an established contributor to longer waiting times) and lower antibiotic timing scores.28-30

Our study has limitations. First, the NHAMCS does not assess the accuracy of ED diagnoses, so we cannot directly conclude that diagnostic accuracy was unaffected by public reporting. However, because rates of pneumonia diagnosis did not change, any increase in diagnostic inaccuracy would have had to be split equally between errors of commission and omission. Second, we lacked complete clinical information (eg, presence or absence of infiltrate on chest X-ray, comorbid illnesses) to perform fuller case-mix adjustment. Third, our measure of patient prioritization (waiting time to see a physician) did not extend to other important processes of ED care, such as immediacy of imaging.18,31 Fourth, our analysis included ED visits by patients whose primary reason for visit was a symptom referable to the lower respiratory tract. It is possible that public reporting of antibiotic timing scores would have a different impact on patients with symptoms less specific for pneumonia (eg, delirium, fever, chest pain). Finally, absence of proof is not proof of absence: failure to detect significant unintended consequences of public antibiotic timing score reporting could be due to insufficient statistical power. However, observed rates of pneumonia diagnosis and antibiotic use were remarkably stable over time, and waiting times for all patients (regardless of respiratory symptoms) accounted for the majority of explainable between-hospital variation in antibiotic timing scores.

External incentive programs designed to encourage healthcare quality improvement are becoming increasingly common, and concerns about unintended consequences of these programs have surfaced in a variety of clinical settings.32-35 Many of these concerns focus on providers “playing for the test” or “gaming the system.”6,10,36 In response to concerns that the pneumonia antibiotic timing measure had led to adverse unintended consequences, the Joint Commission and the National Quality Forum changed the cutoff for timely initial antibiotic in pneumonia from 4 hours to 6 hours after hospital arrival and excluded cases of “diagnostic uncertainty” from score calculation.37 The Infectious Diseases Society of America eliminated the time cutoff altogether, recommending only that initial antibiotics be received in the ED.38 To the extent that the outcomes we examined reflect possible adverse unintended consequences, our results do not support these changes.

In summary, we found that during the first 2 years of public reporting of hospital pneumonia antibiotic timing scores, concerns about potential widespread unintended consequences were not substantiated by the national experience. Patterns of ED pneumonia diagnosis and antibiotic use among patients with respiratory symptoms have remained stable over time. The EDs of hospitals with higher antibiotic timing scores distinguish themselves by having shorter waiting times to see a physician, suggesting that these scores communicate information of real importance to patients and payers. However, providers’ concerns about the potential adverse consequences of public reporting and pay-for-performance programs deserve attention.7,8 Monitoring systems that target these concerns and prospectively measure the patient-level effects of quality improvement initiatives may provide valuable guidance (and reassurance) to policy makers.

Acknowledgments

We thank Robert Krasowski, MA, MS, Susan Schappert, MA, and Peter Meyer, MA, MPH, at the National Center for Health Statistics for invaluable assistance in database linkage, management, and storage. We thank Stuart Lipsitz, ScD, Division of General Medicine, Brigham and Women’s Hospital, for statistical consultation. We thank Ashish Jha, MD, MPH, of the Division of General Medicine, Brigham and Women’s Hospital, and Dale Bratzler, DO, MPH, of the Oklahoma Foundation for Medical Quality for helpful comments on earlier drafts of this manuscript.

Author Affiliations: From the Division of General Medicine (MWF, JAL), Brigham and Women’s Hospital, Boston, MA; the Department of Medicine (JAL), Harvard University, Boston, MA; the Division of General Internal Medicine (AM), University of Pittsburgh; and RAND Health (AM), Pittsburgh, PA.

Funding Source: The study was supported by the Brigham and Women’s Hospital Primary Care Teaching and Education Fund. Dr Friedberg was supported by a National Research Service Award from the Health Resources and Services Administration (5 T32 HP1100120). Dr Mehrotra was supported by a Career Development Award (KL2 RR-024154-03) from the National Center for Research Resources (NCRR), a component of the National Institutes of Health. Dr Linder was supported by a Career Development Award (K08 HS014563) from the Agency for Healthcare Research and Quality. No funding source had a role in the study design, analysis, or manuscript preparation.

Author Disclosure: Dr Linder reports having received grant support from Roche Pharmaceuticals and Pfizer, Inc. The other authors (MWF, AM) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article. Dr Friedberg had full access to all of the de-identified study data, which are stored in secure facilities at the National Center for Health Statistics. Dr Friedberg performed and takes responsibility for the accuracy of the data analysis. Partial results from this work were presented at the 2007 AcademyHealth Annual Research Meeting, June 4, 2007, Orlando, FL.

Authorship Information: Concept and design (MWF, AM, JAL); acquisition of data (MWF, JAL); analysis and interpretation of data (MWF, AM, JAL); drafting of the manuscript (MWF, JAL); critical revision of the manuscript for important intellectual content (AM, JAL); statistical analysis (MWF, JAL); obtaining funding (MWF, JAL); and supervision (JAL).

Address correspondence to: Jeffrey A. Linder, MD, MPH, Division of General Medicine, Brigham and Women’s Hospital, 1620 Tremont St, BC-3-2X, Boston, MA 02120. E-mail: jlinder@partners.org.

1. Williams SC, Schmaltz SP, Morton DJ, et al. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353(3):255-264.

2. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.

3. Pham JC, Kelen GD, Pronovost PJ. National study on the quality of emergency department care in the treatment of acute myocardial infarction and pneumonia. Acad Emerg Med. 2007;14(10):856-863.

4. Hospital Quality Alliance Web site. http://www.aha.org/aha_app/issues/HQA/index.jsp. Accessed July 30, 2008.

5. Centers for Medicare & Medicaid Services Hospital Quality Initiatives Web site. http://www.cms.hhs.gov/HospitalQualityInits/01_Overview.asp. Accessed July 30, 2008.

6. Wachter RM. Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006;295(23):2780-2783.

7. Wachter RM, Flanders SA, Fee C, et al. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med. 2008;149(1):29-32.

8. Baum SG, Kaltsas A. Guideline tyranny: primum non nocere. Clin Infect Dis. 2008;46(12):1879-1880.

9. Thompson D. The pneumonia controversy: hospitals grapple with 4 hour benchmark. Ann Emerg Med. 2006;47(3):259-261.

10. Pines JM. Profiles in patient safety: antibiotic timing in pneumonia and pay-for-performance. Acad Emerg Med. 2006;13(7):787-790.

11. Pines JM. Measuring antibiotic timing for pneumonia in the emergency department: another nail in the coffin. Ann Emerg Med. 2007;49(5):561-563.

12. Seymann GB. Community-acquired pneumonia: defining quality care. J Hosp Med. 2006;1(6):344-353.

13. Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA. 2007;298(15):1800-1802.

14. Metersky ML. Measuring the performance of performance measurement. Arch Intern Med. 2008;168(4):347-348.

15. Kanwar M, Brar N, Khatib R, et al. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-hour antibiotic administration rule. Chest. 2007;131(6):1865-1869.

16. Drake DE, Cohen A, Cohn J. National hospital antibiotic timing measures for pneumonia and antibiotic overuse. Qual Manag Health Care. 2007;16(2):113-122.

17. Polgreen PM, Chen YY, Cavanaugh JE, et al. An outbreak of severe Clostridium difficile-associated disease possibly related to inappropriate antimicrobial therapy for community-acquired pneumonia. Infect Control Hosp Epidemiol. 2007;28(2):212-214.

18. Pines JM, Hollander JE, Lee H, et al. Emergency department operational changes in response to pay-for-performance and antibiotic timing in pneumonia. Acad Emerg Med. 2007;14(6):545-548.

19. Nawar EW, Niska RW, Xu J. National Hospital Ambulatory Medical Care Survey: 2005 Emergency Department Summary: Advance Data from Vital and Health Statistics; No. 386. Hyattsville, MD: National Center for Health Statistics; 2007.

20. National Center for Health Statistics. Public Use Microdata File Documentation, National Hospital Ambulatory Medical Care Survey, 2005. Hyattsville, MD: National Technical Information Service; 2007.

21. General Provisions Respecting Effectiveness, Efficiency, and Quality of Health Services. 42 USC §242m (2005).

22. Centers for Medicare & Medicaid Services, The Joint Commission. Specifications Manual for National Hospital Quality Measures. Version 1.02. 2005.

23. Koch H, Campbell W. The collection and processing of drug information. National Ambulatory Medical Care Survey, 1980. Vital Health Stat 2. 1982;(90):1-90.

24. Fine JM, Fine MJ, Galusha D, et al. Patient and hospital characteristics associated with recommended processes of care for elderly patients hospitalized with pneumonia: results from the Medicare quality indicator system pneumonia module. Arch Intern Med. 2002;162(7):827-833.

25. Mortensen EM, Cornell J, Whittle J. Racial variations in processes of care for patients with community-acquired pneumonia. BMC Health Serv Res. 2004;4(1):20.

26. Houck PM. Antibiotics and pneumonia: is timing everything or just a cause of more problems? Chest. 2006;130(1):1-3.

27. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med. 2008;168(4):351-356.

28. Pines JM, Hollander JE, Localio AR, et al. The association between emergency department crowding and hospital performance on antibiotic timing for pneumonia and percutaneous intervention for myocardial infarction. Acad Emerg Med. 2006;13(8):873-878.

29. Derlet RW, Richards JR. Overcrowding in the nation’s emergency departments: complex causes and disturbing effects. Ann Emerg Med. 2000;35(1):63-68.

30. Pines JM, Localio AR, Hollander JE, et al. The impact of emergency department crowding measures on time to antibiotics for patients with community-acquired pneumonia. Ann Emerg Med. 2007;50(5):510-516. Epub 2007 Oct 3.

31. Pines JM, Morton MJ, Datner EM, et al. Systematic delays in antibiotic administration in the emergency department for adult patients admitted with pneumonia. Acad Emerg Med. 2006;13(9):939-945.

32. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341(15):1147-1150.

33. Casalino LP, Alexander GC, Jin L, et al. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood). 2007;26(2):492-499.

34. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239-1244.

35. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111-123.

36. File TM Jr, Gross PA. Performance measurement in community-acquired pneumonia: consequences intended and unintended. Clin Infect Dis. 2007;44(7):942-944.

37. Mitka M. JCAHO tweaks emergency departments’ pneumonia treatment standards. JAMA. 2007;297(16):1758-1759.

38. Mandell LA, Wunderink RG, Anzueto A, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007;44(suppl 2):S27-S72.
