The American Journal of Managed Care
A 12-month evaluation of a patient-centered medical home demonstration indicated improvement in quality of care without an increase in overall costs.
Background:
A patient-centered medical home (PCMH) demonstration was undertaken at 1 healthcare system, with the goals of improving patient experience, lessening staff burnout, improving quality, and reducing downstream costs. Five design principles guided development of the PCMH changes to staffing, scheduling, point-of-care, outreach, and management.
Objective:
To report differences in patient experience, staff burnout, quality, utilization, and costs in the first year of the PCMH demonstration.
Study Design:
Prospective before-and-after evaluation.
Methods:
Baseline (2006) and 12-month (2007) measures were compared. Patient and staff experiences were measured using surveys from a random sample of patients and all staff at the PCMH and 2 control clinics. Automated data were used to measure and compare change components, quality, utilization, and costs for PCMH enrollees versus enrollees at 19 other clinics. Analyses included multivariate regressions for the different outcomes to account for baseline case mix.
Results:
After adjusting for baseline, PCMH patients reported higher ratings than controls on 6 of 7 patient experience scales. For staff burnout, 10% of PCMH staff reported high emotional exhaustion at 12 months compared with 30% of controls, despite similar rates at baseline. PCMH patients also had gains in composite quality between 1.2% and 1.6% greater than those of other patients. PCMH patients used more e-mail, phone, and specialist visits, but fewer emergency services. At 12 months, there were no significant differences in overall costs.
Conclusions:
A PCMH redesign can be associated with improvements in patient experience, clinician burnout, and quality without increasing overall cost.
(Am J Manag Care. 2009;15(9):e71-e87)
Improving the delivery of primary care is high on the healthcare reform agenda in the United States and other industrialized nations. Evidence shows that when health systems emphasize primary care, patients achieve better outcomes at lower cost.1 Compared with other countries, US healthcare costs significantly more2 and has large gaps in coverage, wide variation in quality, and poorer patient experiences.3 Primary care physicians leave the workforce sooner than specialists4 and complain of a hectic work environment,5,6 and fewer medical trainees choose primary care careers.7
The patient-centered medical home (PCMH), a new model of primary care, is widely regarded as a potential solution to these problems.8,9 This model of practice redesign emphasizes the core attributes of primary care (access, longitudinal relationships, comprehensiveness, and coordination), promotes the chronic care model, maximizes the use of advanced information technology, and aligns reimbursement methods with improved patient access and outcomes.10 Despite growing enthusiasm and desire that the PCMH be fast-tracked, more information on its performance is needed.11 Based on early experiences from a national demonstration project, Nutting and colleagues caution that whole-practice transformation is required, even in highly motivated practices, along with significant resource investment.12 We describe a multifaceted PCMH demonstration at Group Health Cooperative, a large, nonprofit integrated delivery system, and the changes observed in its first year.
SETTING AND CONTEXT
Group Health provides healthcare insurance and comprehensive care to approximately half a million residents in the northwestern United States. Twenty primary care clinics are located in western Washington State, where patients choose a primary care physician to guide and coordinate their care. These physicians (81.6% family physicians, 3.5% general internists, and 14.9% pediatricians) care for an average of 2300 patients and work in multidisciplinary teams. For every 3 physicians, the teams include 4 medical assistants (or licensed practical nurses), 1 registered nurse, 0.5 physician assistants (or nurse practitioners), and 0.3 clinical pharmacists. The primary care clinics have on-site pharmacies, laboratories, and radiology suites. A system of 4 specialty clinics, 6 urgent care/emergency departments, and 7 hospitals (1 owned and operated and 6 contracted) supports these primary care clinics.
Between 2002 and 2006, Group Health implemented a series of reforms to improve efficiency and access,13 including same-day appointment scheduling, direct access to some specialists, primary care redesign to enhance care efficiency, variable physician compensation (salaries with relative value unit [RVU] incentives), and an electronic medical record with a patient Web portal to enable patient e-mail, online medication refills, and record review. The reforms succeeded in improving patient access and satisfaction,14,15 but also increased physician workload, as evidenced by larger panel sizes, greater resource intensity per face-to-face visit, and increasing adoption of patient e-mail.16 These workload changes, combined with the implementation of the electronic medical record, resulted in fatigue and decreased work satisfaction.15 Relative reductions also were seen in nationally reported quality-of-care indicators, along with downstream increases in specialty care, emergency care, and inpatient utilization.15
To counter these trends, Group Health sought to pilot a PCMH redesign in a single metropolitan Seattle clinic serving 9200 adult patients with the goal of spreading lessons learned to other clinics. (The pilot excluded pediatrics because the intervention clinic served relatively few children.) The clinic was chosen based on its modest size, leadership stability, and history of prior successful practice changes. The objectives of the demonstration were to (1) maintain or enhance patient care experience, (2) reduce physician and care team burnout, (3) improve clinical quality scores, and (4) reduce emergency, specialty, and avoidable hospitalization use and costs.
PCMH DESIGN AND IMPLEMENTATION
Design Principles
Through 2 participatory design workshops involving leaders, providers, managers, and patients, 5 design principles were established to guide the selection and implementation of the design components. These principles were based on a review of the attributes of primary care,17 the chronic care model,18 and the medical home.8,9,19 Table 1 provides details.
Change Prerequisites
To allow the clinic to incorporate the design components into its daily work, Group Health made substantial workforce investments to reduce physician panels from an average of 2327 patients to 1800, lengthen visits from 20 to 30 minutes, and allocate daily “desktop medicine” time for staff to perform outreach, coordination, and other activities. Compared with usual staffing, staffing was increased by 15% for physicians, 44% for physician assistants, 17% for registered nurses, 18% for medical assistants (or licensed practical nurses), and 72% for clinical pharmacists. To accommodate the panel size reductions, 25% of patients were reassigned to other physicians.
Change Components
Throughout 2007, clinic leaders and staff implemented a variety of point-of-care, outreach, and management changes to support the design principles (Table 1; see Appendix A for more details). Some components were developed de novo and others were available to all clinics but were emphasized at the PCMH clinic. Of particular note, the clinic systematized the use of team huddles, previsit outreach and chart review, and use of patient-centered quality deficiency reports. The PCMH clinic emphasized both e-mail and telephone encounters (as an alternative or complement to in-person visits), depending on patient abilities and preferences. Throughout the year, staff engaged in team-based rapid process improvements to refine and integrate the change components into their day-to-day work. Finally, physicians were paid by a salary-only model and were exempted from RVU-based adjustments.
METHODS
Evaluation Design
Using measures defined in advance, we designed a prospective, 2-group, before-and-after evaluation of the PCMH pilot during its first year of implementation (January 1 through December 31, 2007). We assessed and compared change components and outcomes at baseline and 12 months for patients and staff at the PCMH clinic compared with patients and staff at other clinics. We took advantage of automated clinical and administrative data for comparisons with all 19 other clinics on the PCMH change components, continuity, quality of care, utilization, and costs. By contrast, because of feasibility and cost constraints, we limited the comparisons for the survey-based measures (patient experience and staff burnout) to 2 control clinics, selected based on similarities in size, Medicare enrollment, and leadership stability. All study procedures were approved by the Group Health Institutional Review Board.
Data Collection
Change Components. We assessed 8 change components that could be measured using automated data across all clinics during the baseline (2006) and implementation (2007) years (Appendix B). These components included the use of secure e-mail threads, telephone calls, group visits, and calls to the 24-hour consulting nurse service. (Although the PCMH sought to increase use of e-mail and calls to the care team, it sought to decrease nonemergent calls to the consulting nurse service by redirecting them to the care team when possible.) We also measured whether patients completed electronic health risk assessments (e-HRAs) and enrolled in peer-led self-management support workshops.20 Previsit outreach was assessed by identifying those patients who received a well-care visit during the study year and who received an e-mail in the 14 days before the visit. To exclude e-mail activity that was part of another care episode, we excluded patients with an in-person visit in the 30 days before their well-care visit. Emergency department follow-up was measured by the presence of a provider-initiated e-mail or a telephone call to a patient within 3 days of an emergency visit.
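To make the previsit outreach definition concrete, the sketch below flags qualifying well-care visits. It is a hypothetical illustration with assumed column names, not the programs actually run against Group Health’s automated data.

```python
import pandas as pd

def previsit_outreach_flags(well_visits: pd.DataFrame,
                            emails: pd.DataFrame,
                            in_person: pd.DataFrame) -> pd.Series:
    """Flag well-care visits preceded by e-mail outreach in the prior 14 days.

    Each DataFrame carries ['patient_id', 'date'] columns (dates as Timestamps).
    Well-care visits with any in-person visit in the prior 30 days are dropped,
    mirroring the exclusion described in the text.
    """
    flags = {}
    for _, row in well_visits.iterrows():
        pid, visit_date = row["patient_id"], row["date"]
        prior_in_person = (
            (in_person["patient_id"] == pid)
            & (in_person["date"] < visit_date)
            & (in_person["date"] >= visit_date - pd.Timedelta(days=30))
        )
        if prior_in_person.any():
            continue  # exclude: the e-mail may belong to another care episode
        prior_emails = (
            (emails["patient_id"] == pid)
            & (emails["date"] < visit_date)
            & (emails["date"] >= visit_date - pd.Timedelta(days=14))
        )
        flags[(pid, visit_date)] = bool(prior_emails.any())
    return pd.Series(flags, name="previsit_outreach")
```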
Because a main principle of the PCMH was to strengthen physician-patient relationships, we also compared continuity of primary care in the baseline and intervention years, using the Bice-Boxerman Continuity of Care (COC) Index.21 The COC Index measures the degree that care is concentrated with a single provider and accounts for the number of visits and different providers seen. Because continuity measures may be spuriously elevated when patients make few visits,22,23 we limited this measure to adult patients with 3 or more visits in both study years.
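For reference, the Bice-Boxerman index for a patient with $N$ total visits distributed across $p$ distinct providers, with $n_i$ visits to provider $i$, can be written (in our notation; see reference 21 for the original formulation) as

$$\mathrm{COC} = \frac{\sum_{i=1}^{p} n_i^{2} - N}{N(N-1)},$$

which ranges from 0 when every visit is with a different provider to 1 when all visits are with a single provider.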
Patient Experience. The sampling frame for the patient experience survey included insured adults age 21 to 85 years who were paneled at the PCMH or at 2 control clinics. Between September and December 2006, we randomly sampled 6187 adult enrollees by mailing them a questionnaire. Respondents were followed up at 12 months. Patient experience was assessed by using 5 domains of the Ambulatory Care Experiences Survey (ACES) Short Form: access, quality of doctor-patient interactions, shared decision making, coordination of care, and helpfulness of physician office staff.24,25 We supplemented the ACES with 2 subscales from the Patient Assessment of Chronic Illness Care (PACIC) survey26: the degree to which patients reported being involved in their own care (patient activation/involvement) and the degree to which care teams helped set and refine healthcare goals (goal setting/tailoring).
Staff Burnout. Between October and December 2006, all staff with patient care responsibilities at the PCMH and 2 control clinics were asked to complete an online baseline survey; follow-up surveys were administered in November and December 2007. We used the 22-item Maslach Burnout Inventory to measure 3 dimensions of burnout: emotional exhaustion, depersonalization, and personal accomplishment.27 In addition to using continuous variables, we grouped the scales into high, moderate, and low categories using normative cut points for medical workers.27 Because of the small numbers of staff by type at the PCMH clinic, they were aggregated to 2 groups: physicians/physician assistants versus all other clinical staff.
Quality of Care. We used routinely collected clinical data to assess markers of quality of care for all adults enrolled at the PCMH clinic and the other 19 clinics in the baseline (2006) and implementation (2007) years. The markers included 22 indicators specified by the Healthcare Effectiveness Data and Information Set28 (Appendix C). These indicators were selected because they are common measures of clinical quality and could be operationalized using automated data. The measures assess screening (4 measures), chronic illness care (14 measures), and medication monitoring (4 measures). We included the cohort of patients who were continuously enrolled (for at least 9 months in 2006 and 3 months in 2007) and qualified for at least 1 indicator in both years at the PCMH (n = 5442) and the 19 other clinics (n = 148,727). Because multiple indicators are unwieldy and different composites can lead to different conclusions,29 we aggregated the indicators into 4 different composite measures with the patient as the unit of analysis (Appendix D). This measurement approach is consistent with the patient-centered orientation of the PCMH. The “patient average” computes the percentage of indicators that were achieved for each patient. The 100% performance measure reflects the percentage of patients who achieved success on all qualifying indicators. The 75% and 50% composites are less stringent versions and assess the percentage of patients who achieved success on fewer indicators.
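To illustrate how such patient-level composites can be constructed, the sketch below computes all 4 from per-patient indicator results. It is a hypothetical illustration (the study’s analyses were run in SAS and Stata) and assumes the 75% and 50% composites mean achieving at least that share of a patient’s qualifying indicators.

```python
from statistics import mean

def quality_composites(patients):
    """Compute 4 patient-level quality composites.

    `patients` maps a patient ID to a list of booleans, one per qualifying
    HEDIS indicator (True = indicator achieved); patients with no qualifying
    indicators are assumed to be excluded upstream.
    """
    shares = {pid: sum(flags) / len(flags) for pid, flags in patients.items()}

    def pct_meeting(threshold):
        # Percentage of patients achieving at least `threshold` of their indicators
        return 100 * mean(share >= threshold for share in shares.values())

    return {
        "patient_average": 100 * mean(shares.values()),  # mean % of indicators achieved
        "all_100pct": pct_meeting(1.00),                 # all qualifying indicators met
        "at_least_75pct": pct_meeting(0.75),
        "at_least_50pct": pct_meeting(0.50),
    }

# Example: patient A meets 2 of 3 indicators, patient B meets both of 2
print(quality_composites({"A": [True, True, False], "B": [True, True]}))
# {'patient_average': 83.3..., 'all_100pct': 50.0, 'at_least_75pct': 50.0, 'at_least_50pct': 100.0}
```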
Utilization and Costs. Data on enrollees’ health services use and cost in the baseline (2006) and implementation (2007) years were obtained from a Group Health information system, which captures and allocates utilization and costs for all services at Group Health facilities and from external claims. The cost allocation system allows both the determination of costs of specific encounters and the aggregation of costs for individuals over time. Costs excluded from the allocation include those not directly related to delivering health services (eg, insurance costs) and patient out-of-pocket costs. Group Health collects nominal cost data that were annualized for individuals not enrolled in Group Health for the entire year using the formula: cost × (12/months enrolled). All reported costs are in 2005 inflation-adjusted US dollars using the local Medical Care Price Index from the US Bureau of Labor Statistics. All PCMH implementation costs (ie, staffing costs) were allocated to patients enrolled at the PCMH clinic.
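As a worked example of the annualization formula (a hypothetical helper; names are illustrative):

```python
def annualize_cost(observed_cost: float, months_enrolled: int) -> float:
    """Annualize a partial-year cost: cost x (12 / months enrolled)."""
    if not 1 <= months_enrolled <= 12:
        raise ValueError("months_enrolled must be between 1 and 12")
    return observed_cost * (12 / months_enrolled)

# A member enrolled for 9 months with $1500 in observed costs is
# counted as $2000 on an annualized basis.
print(annualize_cost(1500, 9))  # 2000.0
```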
We compared adult enrollees at the PCMH clinic with those at the other 19 clinics on implementation-year contacts and costs for primary care, specialty care, emergency department, and inpatient care, as well as on total costs. The included adult population had at least 6 months of enrollment at Group Health at baseline and at least 3 months of enrollment in the implementation year. To account for the fact that enrollees may transfer between clinics, we defined the clinic location as the one where they were enrolled for the longest period during the implementation year. Primary care included all in-person visits to family physicians, general internists, physician assistants, and nurse practitioners. Specialty care included ambulatory visits to all other physicians except emergency medicine, which are allocated to the emergency department. Inpatient care included all professional and facility costs associated with at least 1 overnight hospital stay. We also examined inpatient utilization for “ambulatory care sensitive conditions,” where primary care can potentially prevent the need for hospitalization.30
Analysis
For the surveys, we compared categorical patient and staff characteristics between the PCMH and 2 control clinics using χ2 tests, and continuous characteristics with t tests assuming unequal variances between clinic groups. To evaluate differences for the 7 patient experience scales at baseline, we performed linear regression adjusting for baseline age, education, and self-reported health status. For differences at 12 months, we also adjusted for baseline scores.

For the composite quality measures, we performed paired t tests to compare changes between baseline and implementation years for qualifying patients at the PCMH and 19 other clinics. We used 2-sample t tests, assuming unequal variances, to compare average differences across the 2 years.

To assess the frequency of use of e-mail, telephone visits, and consulting nurse calls, we used multivariate Poisson regressions, adjusting for overdispersion and for patient age, sex, and a diagnosis-based DxCG case-mix score,31 calculated with automated diagnosis data from the baseline year. DxCG scores group and weight diagnoses into clinical groups with similar resource expectations. Because use of group visits and self-management support workshops was relatively low throughout, we used multivariate logistic regressions to estimate relative risk, adjusting for the same patient characteristics and baseline attendance. For e-HRA use, previsit outreach, and emergency follow-up, which were prevalent, we used a modified Poisson regression with robust standard errors to estimate the relative risk.32 We applied the same modified Poisson regression with robust standard errors for the COC Index, choosing cut points of 0.33 (median) and 0.66 (75th percentile). For all change component models (including the COC Index), we adjusted for patient age, sex, DxCG score, and baseline.

We compared annualized utilization in 2007 for patients at the PCMH and 19 other clinics using a multivariate Poisson regression, adjusting for overdispersion and case mix (age, sex, and DxCG scores). Models were used to estimate adjusted rate ratios of annualized utilization in the implementation year between the PCMH and other patients. We estimated annualized costs in 2007 and differences between patients at the PCMH and 19 other clinics using a multivariate linear regression (adjusting for age, sex, and annualized 2006 costs) with an error term from a gamma distribution to adjust for heteroskedastic residuals.33 All analyses were performed using SAS version 9.1 (SAS Inc, Cary, NC) and Stata version 10 (StataCorp, College Station, TX). Two-tailed tests with P <.05 were considered statistically significant.
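As an illustration of the modified Poisson approach used to estimate relative risks for binary change components, a minimal sketch in Python follows; the study’s models were fit in SAS and Stata, and the simulated data, variable names, and coefficients here are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; the study used Group Health automated data.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "pcmh": rng.integers(0, 2, n),       # 1 = PCMH clinic, 0 = comparison clinics
    "age": rng.normal(52, 15, n),
    "male": rng.integers(0, 2, n),
    "dxcg": rng.gamma(2.0, 0.5, n),      # diagnosis-based case-mix score
})
# Hypothetical binary outcome, eg, completing an e-HRA during the year
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * df["pcmh"] + 0.01 * (df["age"] - 52))))
df["ehra"] = rng.binomial(1, p)

# Modified Poisson regression with robust (sandwich) standard errors,
# which yields relative risks for a binary outcome (Zou, reference 32).
X = sm.add_constant(df[["pcmh", "age", "male", "dxcg"]])
fit = sm.GLM(df["ehra"], X, family=sm.families.Poisson()).fit(cov_type="HC0")

print(np.exp(fit.params["pcmh"]))          # adjusted relative risk, PCMH vs other clinics
print(np.exp(fit.conf_int().loc["pcmh"]))  # 95% CI on the relative-risk scale
```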
RESULTS
Change Components
At baseline, PCMH patients were on average 2 years older than patients at the other 19 clinics (mean age 53.0 vs 50.7 years; P <.001) and were less likely to be male (43.4% vs 44.9%; P <.001), but their mean DxCG scores did not differ (P = .378). After controlling for case mix, PCMH patients engaged in 94% more e-mail threads during the intervention year, 12% more telephone calls, and 10% fewer phone calls to the consulting nurse service compared with other patients (Table 2). PCMH patients also were more likely to use group visits (relative risk = 5.9), self-management support workshops (relative risk = 2.16), and the e-HRA (relative risk = 4.53). Compared with other patients, PCMH patients with well-care visits were 9.8 times more likely to have an e-mail in the prior 14 days. Similarly, patients seen in the emergency department were 1.89 times more likely to have a telephone call or e-mail within 3 days after the visit. After adjusting for baseline, we also observed a small statistically significant increase (9%) in continuity of care at the PCMH compared with patients at the 19 other clinics. (This latter analysis was limited to patients with 3 or more in-person visits in both years.)
Patient Experience
Of the 6187 adults randomly sampled at the PCMH and 2 control clinics, we could not contact 57 (1%) patients due to invalid address, death, severe illness, or language barrier; 162 (3%) declined to participate; and 2547 (41%) did not return the survey. Among the 3421 patients who returned the baseline survey (response rate = 55%), we excluded 5 who returned 2 surveys and 63 living with another respondent. Sex distribution did not differ between respondents and nonrespondents to the baseline survey. However, respondents were on average almost 10 years older than nonrespondents and, accordingly, were more likely to have Medicare insurance, had higher DxCG scores, and had longer enrollment histories. Differences between respondents and nonrespondents were similar at the PCMH and control clinics. Among the 3353 baseline respondents, the mean age was 60 years (SD = 15 years), 62% were female, 84% were white, 3% were Hispanic, 77% had at least some college education, and 18% reported being in fair to poor health. At 12 months, 2686 of the baseline respondents also returned the follow-up survey (response rate = 80%). The remainder did not complete the survey because of death or severe illness (10), disenrollment (72), refusal (56), or nonresponse (529).
Table 3 shows that, compared with controls, respondents to the 12-month survey at the PCMH clinic were on average 2 years older and more highly educated, and reported better overall general health at baseline. Table 4 shows the average scores on the 7 patient experience measures at baseline. After adjusting for baseline differences in age, education, and self-reported health status, PCMH patients reported significantly better experience with their care at baseline in the quality of doctor-patient interactions and access to care (P <.05). No significant differences were seen in the other scales.
Comparing 12 months with baseline, PCMH patients reported significantly higher scores on 4 of the 7 patient experience subscales (Table 4). In the control clinics, significantly higher scores were detected on 2 subscales. After adjusting for age, educational attainment, self-reported health status, and baseline experience, PCMH patients reported significantly better care experiences on 6 of 7 subscales than controls, particularly for care coordination, access, and patient activation and involvement.
Staff Burnout
Baseline surveys were administered to all staff at the PCMH (n = 46) and 2 control clinics (n = 86), with the response rate higher at the PCMH than at the control clinics (87% vs 70%). At 12 months, follow-up surveys were administered to 99 clinical staff, and 82 responded (response rate = 83%). The percentage of respondents who were age 55 years or older was greater at the PCMH than at the control clinics, but no significant differences were seen in the percentage who were female or the proportion who were physicians or physician assistants (Table 5).
At baseline, reports of emotional exhaustion did not differ significantly between the PCMH and control clinics, with one-third of all staff reporting high emotional exhaustion. At 12 months, emotional exhaustion was less frequent at the PCMH clinic, with only 10% reporting high burnout compared with 30% of controls. When physicians and physician assistants were examined separately, emotional exhaustion was similar at baseline for the PCMH and control clinics (mean 24.9% vs 28.3%; P = .60) but substantially less at the PCMH at follow-up (mean 14.2% vs 35.2%; P <.001).
Quality of Care
In the baseline year, 68.9% and 67.6% of adult patients enrolled at the PCMH clinic and the 19 other clinics, respectively, qualified for at least 1 quality indicator (mean = 2.5; range = 0-15). PCMH patients qualified for slightly fewer indicators than those at other clinics (mean = 1.80 vs 1.88), but this difference was not statistically significant. Table 6 compares the performance of the 4 composite measures created by aggregating these indicators. At baseline, we found that the PCMH clinic performed better on each of the composite measures compared with the 19 other clinics (P <.001). For example, the average percentage of indicators achieved across patients at the PCMH clinic (“the patient average”) was 68% compared with 64% at other clinics. Regardless of the composite measure chosen, we found statistically significant improvements at the PCMH clinic ranging from 3.7% to 4.4% during the intervention year. Significant improvements (2.0%-2.7%) also were seen at the comparison clinics. Despite being higher at baseline, composite quality gains at the PCMH clinic were between 1.2% and 1.6% greater (P <.05) than those for patients enrolled in the other 19 clinics.
Utilization and Costs
Table 7 reports adjusted estimates for health services utilization in the implementation year (2007) for enrollees at the PCMH clinic compared with adults enrolled at the other 19 clinics. After adjustment, PCMH patients made 6% fewer (albeit longer) in-person primary care visits and 8% more specialty care visits (P <.001). PCMH patients also had 29% fewer emergency department visits than patients at other clinics (P <.001). Overall inpatient admissions did not differ significantly between the PCMH and other clinics, but patients at the PCMH clinic had 11% fewer hospitalizations for ambulatory care sensitive conditions (P <.001).
The cost results followed the same patterns as utilization, except for primary care costs, which were approximately $16 more per patient per year for adults at the PCMH clinic than for those at other clinics, despite the fact that PCMH patients had fewer primary care visits. Specialty care also cost $37 more for the PCMH clinic, although the difference was borderline significant (P = .06). However, we estimated that emergency department costs were $54 less for the PCMH clinic. Totaling costs across all components of care, we found no statistically significant overall cost differences between the PCMH and other clinics.
DISCUSSION
In its first year of implementation, the PCMH demonstration delivered primary care very differently than 19 other clinics. In particular, adults at the PCMH experienced fewer in-person primary care visits (6%) than did patients in other clinics, but significantly more secure e-mail message exchanges (94%) and telephone calls (12%) with their care teams. Slight increases also were seen in continuity of primary care, despite the necessity to repanel 25% of the patients to other physicians. We also witnessed early adoption of many of the PCMH change components including previsit outreach, emergency department follow-up, group visits, self-management support workshops, and e-HRA use.
Consistent with expectations,34 we saw fewer out-of-office urgent contacts, including telephone advice calls to consulting nurses (10%) and emergent and urgent care visits (29%), as well as early indications of a difference in the rate of hospitalizations for ambulatory care sensitive conditions (11%).30 Unlike preliminary data from another PCMH demonstration that focused on Medicare enrollees,35 we detected no reduction in the rate of all-cause admissions. This may be because discharge rates in the local area are in the bottom decile compared with other geographies across the country,36 making such reductions unlikely. Alternatively, this effect may be most pronounced among older Medicare patients or may depend on differences in how the PCMH was designed and implemented. One unexpected and as-yet unexplained finding was the 8% greater use of specialty care visits compared with controls. The interface between a PCMH and specialty care may be particularly challenging, and we are studying this further.
Changes in utilization resulted in no detectable net difference in mean total healthcare costs among individuals enrolled at the PCMH clinic compared with other patients. We estimated that an additional investment of approximately $16 per patient per year was made in primary care (particularly for staffing), but this investment appears to have been recouped quickly (within 12 months) through shifts in patients’ care utilization, particularly savings from reduced use of emergency care.
We also detected improvements in most aspects of patients’ experience of care that we measured, including patient involvement in their care, goal setting and tailoring, and care coordination. We believe that these findings are relatively robust to Hawthorne effects, because patients were not generally informed of the practice redesign. Although the changes in patient experience seen were relatively small, they are notable given the extent of patient repaneling, the number of practice changes implemented, and the fact that practice changes were made throughout the year.
In addition to systemwide improvements in quality, we found greater improvements in the composite measures of clinical quality at the PCMH, indicating improvements across multiple conditions and clinical situations. This finding is consistent with the PCMH objective of comprehensiveness, approaching care improvement from a patient-centered rather than a disease-centered perspective. During the implementation, all clinics were pressed to improve quality and patient experience, which may have narrowed our ability to detect larger changes comparing the PCMH and control clinics.
With fewer and longer in-person visits and more designated time to do outreach, primary healthcare teams seemed able at 12 months to integrate e-mail messages, telephone visits, and proactive care activities into their everyday workflow. We showed a significant decline in provider burnout even among our small group of physicians. PCMH staff reported less than half the rates of emotional exhaustion compared with the controls at 12 months, and dramatically lower rates among physicians. This finding may be especially meaningful because burnout motivates many providers to leave primary care. Qualitative studies of PCMH provider experience revealed that after the first year they were experiencing a more supportive work environment, stronger connections to patients and to each other, and a greater sense of accomplishment from providing better care across multiple dimensions of quality (J. T. Tufano, MHA, J. D. Ralston, MD, MPH, P. Tarczy-Hornoch, MD, R. J. Reid, MD, PhD, unpublished data). The main strength of our evaluation is that we had access to comprehensive data regarding a wide range of outcomes.
The main limitations are the quasi-experimental evaluation design with the intervention conducted at a single clinic and the variable response rate to our surveys. We attempted to limit threats to internal validity by adjusting for baseline differences, but residual selection bias is possible. In addition, our evaluation was performed during the implementation year when practice changes were being made. We also recognize that the results may not be generalizable to other settings, particularly those located outside integrated systems and without electronic medical records. Similarly, due to the largely capitated nature of Group Health’s enrollment and its largely salaried physicians, the applicability to fee-for-service settings is unclear. Furthermore, because the PCMH pilot included introducing and promoting many redesign components, we are unable to determine what intervention components were responsible for the effects.
CONCLUSION
Evaluation of the first 12 months of a PCMH demonstration in an integrated group practice showed significant improvements in patients’ and providers’ experiences and in the quality of clinical care. The significant monetary investment in the PCMH redesign appears to have been recouped within the first year.
Acknowledgments
Researchers and staff at the Group Health Research Institute were involved in the design and conduct of the evaluation; collection, management, analysis, and interpretation of the evaluation data; and preparation, review, and approval of the manuscript. We would like to thank Scott Armstrong, MBA; Alicia Eng, RN; Suzanne Spencer, MD; Claire Trescott, MD; Michael Erikson, MSW; and Harry Shriver, MD, for their leadership at Group Health and at the PCMH pilot clinic. We also would like to thank David Grossman, MD, MPH, Edward Wagner, MD, MPH, and Katie Coleman, MSPH, for reviewing earlier drafts of the manuscript; Kelly Ehrlich, MS, for project management; and Rebecca Hughes and Eve Adams for help with preparation. Finally, we are grateful to the physicians, nurses, pharmacists, medical assistants, and other clinical staff who participated in the PCMH pilot.
Author Affiliations: From Group Health Research Institute (RJR, PAF, OY, TRR, JTT, MPS, EBL), Group Health Cooperative, Seattle, WA; and the School of Medicine (JTT, EBL) and the School of Public Health and Community Medicine (RJR, PAF, EBL), University of Washington, Seattle, WA.
Funding Source: All funding for the demonstration project and evaluation was provided by Group Health Cooperative.
Author Disclosure: PAF, OY, TRR, JTT, and EBL are employees of Group Health Cooperative. RJR and MPS are employees of Group Health Permanente, the affiliated medical group of Group Health Cooperative. RJR and MPS also report owning stock in Group Health Permanente.
Authorship Information: Concept and design (RJR, PAF, MPS, EBL); acquisition of data (PAF, TRR, JTT); analysis and interpretation of data (RJR, PAF, OY, TRR, JTT, MPS); drafting of the manuscript (RJR, PAF, OY, JTT, MPS); critical revision of the manuscript for important intellectual content (RJR, PAF, OY, TRR, JTT, EBL); statistical analysis (PAF, OY); administrative, technical, or logistic support (MPS); and supervision (RJR).
Address correspondence to: Robert J. Reid, MD, PhD, Group Health Research Institute, Group Health Cooperative, 1730 Minor Ave, Ste 1600, Seattle, WA 98101. E-mail: reid.rj@ghc.org.
1. Starfield B, Shi L, Macinko J. Contribution of primary care to health systems and health. Milbank Q. 2005;83(3):457-502.
2. Public Policy Committee of the American College of Physicians, Ginsburg JA, Doherty RB, Ralston JF Jr, et al. Achieving a high-performance health care system with universal access: what the United States can learn from other countries [published correction appears in Ann Intern Med. 2008;148(8):635]. Ann Intern Med. 2008;148(1):55-75.
3. Schoen C, Osborn R, Doty MM, Bishop M, Peugh J, Murukutla N. Toward higher-performance health systems: adults’ health care experiences in seven countries, 2007. Health Aff (Millwood). 2007;26(6):w717-734.
4. Lipner RS, Bylsma WH, Arnold GK, Fortna GS, Tooker J, Cassel CK. Who is maintaining certification in internal medicine—and why? A national survey 10 years after initial certification. Ann Intern Med. 2006;144(1):29-36.
5. Hauer KE, Durning SJ, Kernan WN, et al. Factors associated with medical students’ career choices regarding internal medicine. JAMA. 2008;300(10):1154-1164.
6. Linzer M, Manwell LB, Williams ES, et al; MEMO (Minimizing Error, Maximizing Outcome) Investigators. Working conditions in primary care: physician reactions and care quality. Ann Intern Med. 2009;151(1):28-36, W6-9.
7. National Resident Matching Program. Results and Data: 2008 Main Residency Match. April 2008. http://www.nrmp.org/data/resultsanddata2008.pdf. Accessed September 29, 2008.
8. American College of Physicians. The advanced medical home: a patient-centered, physician-guided model of health care. Policy paper. American College of Physicians; 2006.
9. American Academy of Family Physicians. Joint principles of a patient-centered medical home released by organizations representing more than 300,000 physicians. March 5, 2007. Press release. http://www.aafp.org/online/en/home/media/releases/2007/20070305pressrelease0.html. Accessed June 20, 2008.
10. Berenson RA, Hammons T, Gans DN, et al. A house is not a home: keeping patients at the center of practice redesign. Health Aff (Millwood). 2008;27(5):1219-1230.
11. Barr MS. The need to test the patient-centered medical home. JAMA. 2008;300(7):834-835.
12. Nutting PA, Miller WL, Crabtree BF, Jaen CR, Stewart EE, Stange KC. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med. 2009;7(3):254-260.
13. Ralston JD, Martin DP, Anderson ML, et al. Group Health Cooperative’s transformation toward patient-centered access. Med Care Res Rev. June 23, 2009. Epub ahead of print.
14. Ralston JD, Carrell D, Reid R, Anderson M, Moran M, Hereford J. Patient web services integrated with a shared medical record: patient use and satisfaction [published correction appears in J Am Med Inform Assoc. 2008;15(2):265]. J Am Med Inform Assoc. 2007;14(6):798-806.
15. Robert Wood Johnson Foundation. Improving Access to Improve Quality: Evaluation of an Organizational Innovation. Washington, DC: AcademyHealth; 2008.
16. Conrad D, Fishman P, Grembowski D, et al. Access intervention in an integrated, prepaid group practice: effects on primary care physician productivity. Health Serv Res. 2008;43(5 pt 2):1888-1905.
17. Starfield B. Primary Care: Balancing Health Needs, Services and Technology. New York: Oxford University Press; 1998.
18. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Aff (Millwood). 2001;20(6):64-78.
19. Davis K, Schoenbaum SC, Audet AM. A 2020 vision of patient-centered primary care. J Gen Intern Med. 2005;20(10):953-957.
20. Lorig KR, Sobel DS, Ritter PL, Laurent D, Hobbs M. Effect of a self-management program on patients with chronic disease. Eff Clin Pract. 2001;4(6):256-262.
21. Bice TW, Boxerman SB. A quantitative measure of continuity of care. Med Care. 1977;15(4):347-349.
22. Steinwachs DM. Measuring provider continuity in ambulatory care: an assessment of alternative approaches. Med Care. 1979;17(6):551-565.
23. Smedby O, Eklund G, Eriksson EA, Smedby B. Measures of continuity of care. A register-based correlation study. Med Care. 1986;24(6):511-518.
24. Safran DG, Karp M, Coltin K, et al. Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. J Gen Intern Med. 2006;21(1):13-21.
25. Rodriguez HP, von Glahn T, Rogers WH, Chang H, Fanjiang G, Safran DG. Evaluating patients’ experiences with individual physicians: a randomized trial of mail, internet, and interactive voice response telephone administration of surveys. Med Care. 2006;44(2):167-174.
26. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care. 2005;43(5):436-444.
27. Maslach C, Jackson SA, Leiter MP. Maslach Burnout Inventory Manual. 3rd ed. Mountain View, CA: Consulting Psychologists Press; 1996.
28. National Committee for Quality Assurance. HEDIS 2008: Healthcare Effectiveness Data & Information Set. Vol. 2, Technical Specifications. Washington, DC: National Committee for Quality Assurance; 2007.
29. Reeves D, Campbell SM, Adams J, Shekelle PG, Kontopantelis E, Roland MO. Combining multiple indicators of clinical quality: an evaluation of different analytic approaches. Med Care. 2007;45(6):489-496.
30. Bindman AB, Grumbach K, Osmond D, et al. Preventable hospitalizations and access to health care. JAMA. 1995;274(4):305-311.
31. DxCG Inc. DxCG RiskSmart Clinical Classifications Guide. Boston, MA: DxCG Inc; 2007.
32. Zou G. A modified Poisson regression approach to prospective studies with binary data. Am J Epidemiol. 2004;159(7):702-706.
33. Manning WG, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24(3):465-488.
34. Fisher ES. Building a medical neighborhood for the medical home. N Engl J Med. 2008;359(12):1202-1205.
35. Paulus RA, Davis K, Steele GD. Continuous innovation in health care: implications of the Geisinger experience. Health Aff (Millwood). 2008;27(5):1235-1245.
36. The Dartmouth Atlas of Health Care Interactive Data Tool. 2008. http://www.dartmouthatlas.org/index.shtm. Accessed October 7, 2008.