The American Journal of Managed Care

November 2013
Volume 19
Issue 11

Variations in the Service Quality of Medical Practices

Service quality (appointment lags and wait times) of primary care physician practices varies tremendously across the country and is associated with the organization of practices.

Objectives:

To examine regional variation in the service quality of physician practices and to assess the association of this variation with the supply and organization of physicians.

Study Design:

Secondary analyses of the Community Tracking Study (CTS) household and physician surveys.

Methods:

A total of 40,339 individuals who had seen a primary care physician because of an illness or injury and 17,345 generalist physicians across 4 survey time periods in 60 CTS sites were included. Service quality measures used were lag between making an appointment and seeing a physician, and wait time at the physician’s office. Our supply measure was the physician-to-population ratio. Our organizational measure was the percentage of physicians in group practices. Multivariate regressions were performed to examine the relationship between service quality and the supply and organization of physicians.

Results:

There was substantial variation in the service quality of physician visits across the country. For example, in 2003, the average wait time to see a doctor was 16 minutes in Milwaukee but more than 41 minutes in Miami; the average appointment lag for a sick visit in 2003 was 1.2 days in west-central Alabama but almost 6 days in northwestern Washington. Service quality was not associated with the primary care physician-to-population ratio and had varying associations with the organization of practices.

Conclusions:

Cross-site variation in service quality of care in primary care has been large, persistent, and associated with the organization of practices. Areas with higher primary care physician-to-population ratios had longer, not shorter, appointment lags.

Am J Manag Care. 2013;19(11):e378-e385

Cross-site variation in service quality (wait times and appointment lags) in primary care is large, persistent, and associated with the organization of practices.

  • This variation is robust to controls for patient characteristics.

  • Service quality is associated with the organization of practices.

  • Areas with higher primary care physician-to-population ratios had longer, not shorter, appointment lags.

The expansion of health insurance coverage to tens of millions of uninsured people has raised concerns about the adequacy of physician supply.1 An early report from Massachusetts, for example, suggested that waiting times to see primary care physicians had increased substantially after expansion of insurance coverage.2,3 Some research also suggested that the technical quality of medical care is higher in areas with more physicians,4 although this result was limited to generalist physicians.5

Yet there is little evidence on how the quality of the patient experience varies with physician supply. Later reports from Massachusetts suggested that reform had little impact on waiting times.6 Moreover, the relationship between physician supply and service quality may be tempered by how physicians are organized. Alternative organizations of practices may generate the same or better service quality with fewer physicians.7-9 For example, service quality improvement tools (eg, electronic scheduling) may be more economical when physicians are organized into groups or participate in managed care organizations.

As the United States undergoes substantial changes to its healthcare system, including expansions in insurance coverage and changes to the delivery system, it is important to understand the landscape of service quality and to assess how that landscape may respond to changes in physician supply and organization. This study examines variations in 2 measures of service quality—waiting time and appointment lag—across the United States and assesses the association between these measures and the supply and organization of healthcare.

METHODS

We drew data from the Community Tracking Study (CTS) Household Surveys (1996-1997, 1998-1999, 2000-2001, and 2003)10 and Community Tracking Study Physician Surveys (1996-1997, 1998-1999, 2000-2001, and 2004-2005)11 conducted by the Center for Studying Health System Change. The service quality measures used were (1) the lag between making an appointment and seeing a physician and (2) the wait time at the physician's office. These repeated cross-sectional surveys include representative samples of the population of households and physicians across 60 sites in the United States (51 metropolitan areas and 9 nonmetropolitan areas). The response rates for the CTS Household Surveys in chronological order were 65%, 63%, 61%, and 56.5%. The response rates for the CTS Physician Surveys in chronological order were 65.4%, 60.9%, 58.6%, and 52.4%. While the public-use data files for the CTS Household Survey contain CTS site identifiers, the public-use data files for the CTS Physician Survey do not, so we used the restricted-use data files for the latter survey. We matched the CTS Household Survey data and the CTS Physician Survey data by CTS site and year (the CTS Household Survey data from 2003 were matched with the CTS Physician Survey data from 2004-2005).

The CTS data have several limitations. As with the General Practice Assessment Questionnaire, Medical Expenditure Panel Survey, and Consumer Assessment of Healthcare Providers and Systems, measures of waiting time to see the doctor and time between obtaining an appointment and seeing a doctor were obtained by patient self-report and were not independently verified. Sample sizes for both the household and physician surveys in some of the smaller CTS sites were relatively small. The match between physician and household surveys was not exact in 2003 because no physician survey was conducted in 2003. While CTS did conduct a national survey for households in 2007 and 2010 and for physicians in 2008, those data are not available at the local level and are not suitable for these analyses. Notably, these survey years do not include wait times in the doctor’s office and, compared with prior survey years, do not include the exact number of days between appointment and doctor’s visit.

We focused on sick visits to primary care physicians (PCPs) by adult insured patients. We focused on sick visits because appointment times for return visits may be scheduled well in advance. We concentrated on primary care visits, as primary care has been the main area of concern about physician supply adequacy. Multivariate analyses controlled for patient age, sex, race, health status, insurance type (Medicare, Medicaid, private), education, employment status, marital status, and income. Health status was measured on a 4-point scale, and we dichotomized it to “fair or poor” (relative to good or better health). We included a measure of having at least 1 hospital stay in the previous year. Insurance was classified as Medicare (with or without supplemental coverage), Medicaid, or private insurance (employer based or individually purchased). Those with military insurance or other public coverage and the uninsured were excluded from the analyses to create a more uniform sample for cross-site comparisons. We defined education as less than high school, high school, some college, and college or more. We defined full-time employment as working 35 hours a week or more.

We focused on 2 measures of service quality: (1) the interval between making an appointment and seeing a physician (appointment lag) and (2) time in the physician’s office before being seen (wait time). Both measures refer to a patient’s last visit. We limited analyses of appointment lags to patients with lags of 21 or fewer days to capture only true sick care visits. (Analyses of appointment lags longer than 21 days found that 84% of the lags occurred on 30-day intervals [30 days, 60 days, 90 days], suggesting that these were mainly scheduled follow-up visits. In analyses of patient satisfaction, the effect of appointment lag on satisfaction was negative and significant when lags were truncated at 21 days, but positive and significant without this truncation, again suggesting that long-scheduled visits were the main source of very long appointment lags.) This limitation excluded 12% of respondents; the basic results were not sensitive to the exclusion. We also repeated the analyses using alternative cutoffs (14 days and 30 days) and the log of the appointment lag. Results were very similar to those shown here.
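The truncation rule and the 30-day-interval check described above can be illustrated with a minimal sketch (Python with pandas; this is not the authors' code, and the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical file and column: 'appt_lag_days' is the self-reported number
# of days between making the appointment and seeing the physician.
visits = pd.read_csv("cts_household_visits.csv")

# Main analyses keep appointment lags of 21 days or fewer.
sick = visits[visits["appt_lag_days"] <= 21]
print(f"Excluded by the 21-day rule: {1 - len(sick) / len(visits):.1%}")

# Do the excluded long lags fall on 30-day intervals (30, 60, 90, ...)?
# If so, they are likely scheduled follow-up visits rather than sick visits.
long_lags = visits.loc[visits["appt_lag_days"] > 21, "appt_lag_days"]
print(f"Share of >21-day lags on 30-day multiples: {(long_lags % 30 == 0).mean():.1%}")
```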

Prior literature suggests that these dimensions of service quality do matter to patients.12-14 We verified the significance of these measures in our data by relating them to CTS respondents' reports of whether they were "very satisfied" with their healthcare, measured on a 5-point Likert scale. In multivariate linear regressions, both measures were statistically significant predictors of satisfaction (Table 1).

Our primary measure of physician supply was the PCP-to-population ratio, which we constructed using PCP supply estimates from the annually published Physician Characteristics and Distribution in the U.S. and population estimates from the US Census.15 In sensitivity analyses, we also examined the relationship of service quality to 2 alternative measures of supply: (1) the PCP-to-insured population ratio, constructed as above but adjusting the denominator to reflect the proportion of the population in the CTS household sample holding any type of health insurance, and (2) the percentage of physicians at a CTS site who reported that they accepted all new Medicare patients (implying that there are enough physicians in the area for physicians to be willing to accept additional patients at Medicare reimbursement levels). Since Physician Characteristics and Distribution in the U.S. did not have physician supply estimates for nonmetropolitan CTS sites, we excluded the 9 nonmetropolitan CTS sites in analyses using physician-to-population ratios.
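As an illustration of the two ratio-based supply measures, here is a small worked calculation (Python; all values are invented, and the per-100,000 scaling is an assumption for illustration only, not taken from the paper):

```python
# Hypothetical site-level inputs: PCP counts from Physician Characteristics
# and Distribution in the U.S., population from the US Census, and the
# insured share estimated from the CTS household sample.
pcp_count = 1_250        # PCPs in the site (illustrative)
population = 1_200_000   # site population (illustrative)
insured_share = 0.85     # share of the household sample with any insurance

# Ratio per 100,000 residents, and the alternative ratio per 100,000 insured.
pcp_per_100k = pcp_count / population * 100_000
pcp_per_100k_insured = pcp_count / (population * insured_share) * 100_000
print(round(pcp_per_100k, 1), round(pcp_per_100k_insured, 1))
```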

One potential concern about the use of contemporaneous measures of physician supply in analyses of appointment lags and wait times is that the supply of physicians may respond to wait times and appointment lags. To assess the importance of this potential reverse causality, we calculated Pearson correlation coefficients between wait times and appointment lags in 1996 and the subsequent change in physician supply from 1996 to 2000 (and from 1996 to 2005).
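The reverse-causality check amounts to a simple baseline-versus-change correlation. A sketch follows (Python with scipy; site-level file and variable names are hypothetical and do not come from the CTS documentation):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical site-level data: one row per CTS site.
sites = pd.read_csv("cts_site_level.csv")
sites["supply_change_96_00"] = sites["pcp_ratio_2000"] - sites["pcp_ratio_1996"]

# Correlation between baseline (1996) service quality and the
# subsequent change in physician supply.
r_wait, p_wait = pearsonr(sites["wait_time_1996"], sites["supply_change_96_00"])
r_lag, p_lag = pearsonr(sites["appt_lag_1996"], sites["supply_change_96_00"])
print(f"wait time vs supply change: r={r_wait:.2f} (p={p_wait:.3f})")
print(f"appt lag  vs supply change: r={r_lag:.2f} (p={p_lag:.3f})")
```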

The primary measure of physician organization used was the percentage of physicians in a CTS site in group practices of more than 10 physicians (group practices with 7 or more physicians are in the 75th percentile for the number of physicians in a practice). To separate the effects of organization from those of payment, we included in the regressions the average share of revenue in a CTS site from capitation. In our analyses we also controlled for the share of physicians employed by institutions such as hospitals or medical schools, as these organizations may have different incentives than do private practitioners.

To assess the extent of variation in service quality across the country, we computed patient risk-adjusted wait times and appointment lags by estimating average residuals by CTS site and year of each service quality measure after controlling for the patient sociodemographic characteristics listed above. We ordered the residuals by quartiles and mapped them. To assess the persistence of site-specific variation in wait times and appointment lags across survey years, we calculated Spearman’s rank correlation coefficients of the quartiles of these residuals.
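A minimal sketch of this risk-adjustment and persistence calculation is shown below (Python with pandas, statsmodels, and scipy; this is not the authors' code, variable names are hypothetical, survey weights are omitted, and years are coded by the first year of each survey period):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

visits = pd.read_csv("cts_household_visits.csv")  # hypothetical file

# Regress wait time on patient characteristics; the residual is the
# patient-risk-adjusted component of wait time.
model = smf.ols(
    "wait_time ~ age + female + C(race) + fair_poor_health + C(insurance)"
    " + C(education) + employed_full_time + married + income",
    data=visits,
).fit()
visits["wait_resid"] = model.resid

# Average residuals by CTS site and year, then assign quartiles within year.
site_year = visits.groupby(["site", "year"])["wait_resid"].mean().reset_index()
site_year["quartile"] = site_year.groupby("year")["wait_resid"].transform(
    lambda x: pd.qcut(x, 4, labels=False)
)

# Persistence: Spearman correlation of quartile ranks across survey years.
wide = site_year.pivot(index="site", columns="year", values="quartile")
rho, p = spearmanr(wide[1996], wide[2003], nan_policy="omit")
print(f"Spearman rho, 1996 vs 2003 quartiles: {rho:.2f} (p={p:.3f})")
```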

Next, we calculated weighted means of the supply and organizational measures from the physician survey and matched these by year and CTS site to the household survey. Following the method of Fisher and colleagues, these variables were measured at the site-year level, which is appropriate as they reflected market-level conditions.16,17 In multivariate analyses, we examined the association between these CTS site-level supply and organizational variables and service quality.
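The site-year aggregation and matching step might look like the following sketch (Python with pandas; file names, variable names, and the weight column are hypothetical, and the 2003-to-2004-2005 match is handled with a simple year recode):

```python
import numpy as np
import pandas as pd

physicians = pd.read_csv("cts_physician_survey.csv")  # hypothetical file
households = pd.read_csv("cts_household_visits.csv")  # hypothetical file

def weighted_mean(group, value_col, weight_col="svy_weight"):
    # Survey-weighted mean of one variable within a site-year cell.
    return np.average(group[value_col], weights=group[weight_col])

site_year = (
    physicians.groupby(["site", "year"])
    .apply(lambda g: pd.Series({
        "pct_large_group": weighted_mean(g, "in_large_group"),
        "pct_capitation_revenue": weighted_mean(g, "capitation_share"),
        "pct_institution_employed": weighted_mean(g, "employed_by_institution"),
    }))
    .reset_index()
)

# The 2003 household survey is matched to the 2004-2005 physician survey.
households["match_year"] = households["year"].replace({2003: 2004})
analysis = households.merge(
    site_year, left_on=["site", "match_year"], right_on=["site", "year"],
    how="left", suffixes=("", "_phys"),
)
```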

We analyzed physician data in SUDAAN to adjust for the complex design of the CTS survey and for clustering at the CTS site level.18 Household data were analyzed using the complex survey modules in Stata release 10.0 to take advantage of statistical procedures in Stata not found in SUDAAN.19 These complex survey modules accounted for the clustering of observations at the site level and the repeated sampling of some individuals. For site-specific estimates of the household survey, the variance estimates from Stata were identical to those generated by SUDAAN.20 In multivariate analyses, we pooled data across all 4 survey years. In all analyses we included year dummies to capture time-varying effects, and we used robust standard errors to adjust for heteroskedasticity. We therefore ran regressions of the form:

Q = b0 + b1S + b2O + b3X + b4T + ε

where Q is one of the above measures of service quality, S is one of the above measures of physician supply, O is one of the above measures of physician organization, X is a vector of patient characteristics, T represents year dummies, and ε is the error term (taking into account the complex survey design).
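A minimal sketch of this pooled regression follows (Python with statsmodels; the published analyses used Stata's complex-survey modules and SUDAAN, so the site-clustered robust standard errors here are only a stand-in for the full survey design, and all file and variable names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged household + site-level file (see the matching sketch above).
analysis = pd.read_csv("cts_merged_analysis.csv")

cols = ["appt_lag", "pcp_per_100k", "pct_large_group", "pct_capitation_revenue",
        "pct_institution_employed", "age", "female", "race", "fair_poor_health",
        "insurance", "education", "employed_full_time", "married", "income",
        "year", "site"]
df = analysis.dropna(subset=cols)  # keep cluster groups aligned with the sample

formula = (
    "appt_lag ~ pcp_per_100k"                        # S: physician supply
    " + pct_large_group + pct_capitation_revenue"
    " + pct_institution_employed"                    # O: organization of practice
    " + age + female + C(race) + fair_poor_health"
    " + C(insurance) + C(education)"
    " + employed_full_time + married + income"       # X: patient characteristics
    " + C(year)"                                     # T: year dummies
)

# Cluster-robust standard errors at the CTS-site level approximate the
# survey-design adjustments used in the published analyses.
fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["site"]}
)
print(fit.summary())
```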

RESULTS

Our sample included 40,339 individuals who made primary care sick visits and 17,345 generalist physicians across 4 survey periods. Average wait time across the CTS sites was 26 minutes, and average appointment lag for a sick visit was about 3 days. On average, 17.9% of patients waited more than 30 minutes to see a doctor, and 9.0% had an appointment lag of more than 1 week. The average PCP-to-population ratio across CTS sites was 104. The majority of physicians accepted all new Medicare patients. About one-fifth of physicians were in large group practices. The average percentage of revenue from capitation was about one-fourth (Table 2).

There was a 2-fold or greater variation in both measures of service quality among CTS sites. For example, in 2003, the average wait time to see a doctor was 16 minutes in Milwaukee but more than 41 minutes in Miami. Similarly, the average appointment lag for a sick visit in 2003 was 1.2 days in west-central Alabama but almost 6 days in northwestern Washington and 5.4 days in Pittsburgh, Pennsylvania (data not shown).

Variation in service quality across CTS sites remained high and persistent over this period, even after controlling for patient characteristics (Figures 1 and 2). Risk-adjusting wait times and appointment lags by patient characteristics reduced the variance in wait times by about one-third but had little effect on the relative ordering of these residuals. Risk-adjusting appointment lags had very little effect on either the variance or the ordering of the residuals. There was considerable persistence within site in both waiting times and appointment lags. The Spearman's correlation coefficients of the quartile ranks of patient-characteristic-adjusted wait times were 0.56 between 1996 and 1998, 0.63 between 1998 and 2000, 0.59 between 2000 and 2003, and 0.60 between 1996 and 2003. Of the 15 CTS sites in the top quartile of wait times in 1996, 9 were still in the top quartile in 2003 and none had fallen to the bottom quartile. Similarly, the correlation coefficients of the quartile ranks of appointment lags were 0.53 between 1996 and 1998, 0.49 between 1998 and 2000, 0.23 between 2000 and 2003, and 0.32 between 1996 and 2003. Of the 15 CTS sites in the top quartile of appointment lags in 1996, 7 were still in the top quartile in 2003, and only 1 had fallen to the bottom quartile.

Contrary to expectations, the physician-to-population ratio was not correlated with better service quality on either measure; in fact, higher PCP-to-population ratios were associated with longer appointment lags, although the effect size was modest (Table 3). In 2003, going from the area with the lowest ratio of physicians to population (56.4; Las Vegas, Nevada) to that with the highest ratio (174.9; San Francisco, California) was associated with an increase in appointment lag of a little more than half a day (relative to an average appointment lag of about 3 days). In sensitivity analyses, these results remained essentially unchanged (positive and statistically significant) when using the physician-to-insured population ratio or the average percentage of physicians in a site accepting all new Medicare patients as the measure of supply (results not shown). In analyses using the log of appointment lag, an increase of 10 physicians was associated with a statistically significant 2% increase in appointment lag (results not shown).
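For readers less familiar with log-linear specifications, the reported 2% figure maps back to the underlying coefficient roughly as follows (a worked illustration only; the coefficient value is implied by the reported result rather than taken from the paper's tables). If ln(lag) = b0 + b1S + ..., a 10-unit increase in the supply measure multiplies the expected appointment lag by exp(10·b1), so:

exp(10·b1) - 1 ≈ 0.02, which implies b1 ≈ ln(1.02)/10 ≈ 0.002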

Organizational variables were somewhat more consistently associated with service quality (Table 3). Patients in CTS sites where a greater percentage of physicians were in large group practices had shorter wait times. Going from a site where no physicians were in large group practices to one where all physicians were in such practices was associated with about a 14-minute shorter wait time (relative to an average wait time of 26 minutes). As expected, capitation payment was associated with longer appointment lags. Going from a site where no physicians were paid capitation to one where all physicians were paid capitation was associated with about a 1.5-day longer appointment lag.

Finally, we assessed the importance of potential reverse causality between appointment lags or wait times and physician supply by calculating Pearson correlation coefficients between wait times or appointment lags in 1996 and the subsequent change in physician supply from 1996 to 2000 (and from 1996 to 2003). We found negative or zero correlations. That is, areas with longer wait times or appointment lags in 1996 tended to see reductions in subsequent physician supply, not increases.

DISCUSSION

A growing literature examines the variation in the delivery of healthcare services across the country. The cost of care varies more than 2-fold nationwide, and the technical quality of care likewise varies substantially.21,22 Healthcare quality, though, also incorporates interpersonal aspects of care and the amenities (eg, timeliness, convenience) associated with such care.23 Many organizations such as the National Quality Forum have called for including patient-reported outcomes in measures of quality.24 This study found that similar variation exists in key dimensions of the service quality of the patient experience.

Our study found that variation in service quality is not principally related to variation in the supply of physicians, at least within the ranges of physician supply observed today. That was true whether we measured provider supply by physician-to-population ratios or by the willingness of existing providers to accept all new Medicare patients. Our findings are related to those of Nyweide and colleagues,25 who found that physician supply is not associated with seniors’ perception of their access to or quality of care. In fact, we found that higher physician supply was associated with longer appointment lags, although the magnitude of this relationship was modest. This result should be explored further to see whether it reflects different care patterns (ie, more visits per patient) or less efficient physician practice. In contrast, our findings suggest that the organization of medical practice does have a consistent relationship to service quality. We found that large group practices were associated with much shorter wait times.

Several efforts are under way to improve service quality. For example, optimization of the patient experience is one of the 3 dimensions of the Institute for Healthcare Improvement’s Triple Aim initiative.26 In addition, the Agency for Healthcare Research and Quality measures several aspects of service quality through its Consumer Assessment of Healthcare Providers and Systems program.27,28

Many ongoing organizational changes may already be improving service quality. Electronic medical records are being adopted by more physician practices,29 which may make visits more efficient by making past records easier to retrieve, thereby decreasing wait times. Increasing use of mid-level providers such as nurse practitioners may reduce appointment lags by allowing such providers to handle certain urgent visits.30 Hospitals increasingly are purchasing medical practices31; such practices may be able to optimize service quality through organizational changes implemented by the hospital. Further diffusion of the patient-centered medical home may also improve service quality. Nearly 5000 practices have been certified as medical homes, and this model is growing rapidly.32 However, these organizational changes may also worsen service quality; for example, electronic medical records and the purchasing of practices by hospitals may temporarily increase wait times and appointment lags as physicians and staff adjust to a new method of record-keeping or to new management, respectively.

A limitation of our study is that our most recent data are from surveys conducted in 2003 (for the Household Survey) and 2004-2005 (for the Physician Survey). The persistence in the cross-site variations in waiting times and appointment lags and the relationship of these service quality measures to organizational characteristics over the 1996 to 2003 period—a period of tumultuous organizational change—increase our confidence that our results remain valid today. To provide further confirmation, we compared the characteristics of physician practices in the 2005 physician survey with those reported in the most recent 2008 national CTS Physician Survey. The distribution of practices across the country was largely unchanged from 2005 to 2008: solo practices went from 35% of practices to 33%, health maintenance organization practices went from 5% to 4% of practices, and medical school and hospital-based practices went from 23% of practices to 20%. Meanwhile, the percentage of revenue from Medicare went from 30% to 31%, the percentage of revenue from Medicaid went from 15% to 17%, and the percentage of revenue from capitation went from 16% to 12%. That the changes in practice distribution and in sources of revenue from 1996 to 2005 were the same as or greater than those from 2005 to 2008 argues for the persistence of our results.

In summary, we found that wait times and appointment lags were significantly related to patient satisfaction, and variation in both service quality measures was large and persistent. We found that organizational variables, but not supply variables, were associated with these outcomes. Although there may be setbacks during implementation, the organizational changes noted above may be the best hope to lower both wait times and appointment lags.

Author Affiliations: From Massachusetts General Hospital (DPL), Boston, MA; Department of Health Policy and Management (SAG), Columbia University Mailman School of Public Health, New York, NY.

Funding Source: None.

Author Disclosures: The authors (DPL, SAG) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (SAG, DPL); acquisition of data (DPL); analysis and interpretation of data (SAG); drafting of the manuscript (SAG); critical revision of the manuscript for important intellectual content (SAG); statistical analysis (SAG, DPL); provision of study materials or patients (SAG); administrative, technical, or logistic support (SAG); and supervision (SAG).

Address correspondence to: dply@partners.org

REFERENCES

1. Department of Health Care Finance and Policy. Primary Care in Massachusetts: Overview of Trends and Opportunities. http://www.mass.gov/chia/docs/r/pubs/10/primary-care-report-in-massachusetts.ppt. Published July 2010. Accessed February 17, 2013.

2. Massachusetts Medical Society. MMS physician workforce study—2008. http://www.massmed.org/Content/NavigationMenu/NewsandPublications/ResearchReportsStudies/PhysicianWorkforceStudy/MMS_Physician_Workf.htm. Published 2008. Accessed February 17, 2013.

3. Kowalczyk L. Across Mass., wait to see doctors grows. Boston Globe. http://www.boston.com/news/health/articles/2008/09/22/across_mass_wait_to_see_doctors_grows/. Published September 22, 2008. Accessed February 17, 2013.

4. Cooper RA. States with more physicians have better-quality health care. Health Aff (Millwood). 2009;28(1):w91-w102.

5. Baicker K, Chandra A. Cooper’s analysis is incorrect. Health Aff (Millwood). 2009;28(1):w116-w118.

6. Massachusetts Medical Society. 2012 MMS patient access to care studies. http://www.massmed.org/News-and-Publications/Research-and-Studies/2012-MMS-Patient-Access-to-Care-Studies/#.UkCi-Qrz6GsM. Published August 2012. Accessed February 17, 2013.

7. Rossiter LF, Langwell K, Wan TT, Rivnyak M. Patient satisfaction among elderly enrollees and disenrollees in Medicare health maintenance organizations: results from the National Medicare Competition Evaluation. JAMA. 1989;262(1):57-63.

8. Dial TH, Palsbo SE, Bergsten C, Gabel JR, Weiner J. Clinical staffing in staff- and group-model HMOs. Health Aff (Millwood). 1995;14(2):168-180.

9. Weiner JP. Forecasting the effects of health reform on US physician workforce requirements: evidence from HMO staffing patterns. JAMA. 1994;272(3):222-230.

10. Center for Studying Health System Change. CTS Household Survey and HSC Health Tracking Household Surveys. http://www.hschange.com/index.cgi?data=02. Updated September 23, 2013. Accessed February 17, 2011.

11. Center for Studying Health System Change. CTS Physician Surveys and the HSC 2008 Health Tracking Physician Survey. http://www.hschange.com/index.cgi?data=04. Updated September 23, 2013. Accessed February 17, 2011.

12. Fung CH, Elliott MN, Hays RD, et al. Patients’ preferences for technical versus interpersonal quality when selecting a primary care physician. Health Serv Res. 2005;40(4):957-977.

13. Leddy KM, Kaldenberg DO, Becker BW. Timeliness in ambulatory care treatment: an examination of patient satisfaction and wait times in medical practices and outpatient test and treatment facilities. J Ambul Care Manage. 2003;26(2):138-149.

14. Probst JC, Greenhouse DL, Selassie AW. Patient and physician satisfaction with an outpatient care visit. J Fam Pract. 1997;45(5):418-425.

15. American Medical Association. Physician Characteristics and Distribution in the U.S. Chicago, IL: American Medical Association; 2013.

16. Fisher ES, Wennberg DR, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending, part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273-287.

17. Fisher ES, Wennberg DR, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending, part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288-298.

18. Research Triangle Institute. SUDAAN User’s Manual, Release 8.0. Research Triangle Park, NC: Research Triangle Institute; 2001.

19. StataCorp. Stata Statistical Software [computer program]. Release 10. College Station, TX: StataCorp LP; 2007.

20. Schaefer E, Potter F, Williams S, Diaz-Tena N, Reschovsky JD, Moore G; for the Center for Studying Health System Change. Comparison of Selected Statistical Software Packages for Variance Estimation in the CTS Surveys. Community Tracking Study. http://www.hschange.com/CONTENT/575/575.pdf. Published May 2003. Accessed September 2, 2012.

21. Congressional Budget Office (CBO). Geographic Variation in Health Care Spending. http://www.cbo.gov/ftpdocs/89xxdoc8972/02-15-GeogHealth.pdf. Published February 2008. Accessed February 17, 2013.

22. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.

23. Donabedian A. The quality of care: how can it be assessed? JAMA. 1988;260(12):1743-1748.

24. Hostetter M, Klein S. Using patient-reported outcomes to improve health care quality. Quality Matters. The Commonwealth Fund. http://www.commonwealthfund.org/~/media/Files/Newsletters/Quality%20Matters/QM_2011_Dec_Jan.pdf. Published December 2011/January 2012. Accessed February 17, 2013.

25. Nyweide DJ, Anthony DL, Chang C, Goodman D. Seniors’ perceptions of health care not closely associated with physician supply. Health Aff (Millwood). 2011;30(2):219-227.

26. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759-769.

27. Agency for Healthcare Research and Quality, American Institutes for Research, Harvard Medical School, Rand Corporation. The CAHPS® Clinician & Group Survey. Submitted to the National Quality Forum. http://www.aqaalliance.org/October24Meeting/PerformanceMeasurement/CAHPSCGtext.doc. Published July 13, 2006. Accessed February 17, 2013.

28. Rodriguez HP, von Glahn T, Rogers WH, Safran DG. Organizational and market influences on physician performance on patient experience measures. Health Serv Res. 2009;44(3):880-901.

29. Jamoom E, Beatty P, Bercovitz A, Woodwell D, Palso K, Rechtsteiner E. Physician adoption of electronic health record systems: United States, 2011. NCHS Data Brief. 2012;(98):1-8.

30. Cassidy A. Nurse practitioners and primary care. Health Policy Brief. Health Affairs. Robert Wood Johnson Foundation. http://healthaffairs.org/healthpolicybriefs/brief_pdfs/healthpolicybrief_79.pdf. Published October 25, 2012. Accessed February 16, 2013.

31. Dolan PL. Physician practice purchases already surpass 2010 levels. American Medical News. http://www.ama-assn.org/amednews/2011/10/31/bise1102.htm. Published November 2, 2011. Accessed February 17, 2013.

32. National Committee for Quality Assurance. Patient-Centered Medical Home Program; January 23-24, 2013; New Orleans, Louisiana. http://www.ncqa.org/EducationEvents/SeminarsandWebinars/LiveSeminarsWebinars/2013FacilitatingPCMHRecognition.aspx. Accessed February 17, 2013.
