Given that accountable care organizations (ACOs) will be rated on patient experience, and that wait times for specialist consults are associated with patient satisfaction, ACOs should monitor this process.
ABSTRACT
Objectives: The Medicare accountable care organization (ACO) program financially rewards ACOs for providing high-quality healthcare, and patient experience of care is a component of that quality assessment. This study examined whether administrative measures of wait times for specialist consults are associated with self-reported patient satisfaction.
Study Design: Analyses used administrative and survey data from a clinically integrated healthcare system similar to an ACO.
Methods: Veterans Health Administration (VHA) data from fiscal year 2012 were obtained. Administrative access metrics included the number of days between creation of the consult request and 1) the first action taken on the consult, 2) scheduling of the consult, and 3) completion of the consult. The Survey of Healthcare Experiences of Patients—which is modeled after the Consumer Assessment of Healthcare Providers and Systems family of survey instruments used by ACOs to measure patient experience—provided the outcome measures. Outcomes included general VHA satisfaction measures and satisfaction with timeliness of care, including wait times for specialists and treatments. Logistic regression models predicted the likelihood of patients reporting being satisfied on each outcome. Models were risk adjusted for demographics, self-reported health, and healthcare use.
Results: Longer waits for the scheduling of consults and completed consults were found to be significantly associated with decreased patient satisfaction.
Conclusions: Because patients often report high levels of powerlessness and uncertainty while waiting for consultation, these wait times are an important patient-centered access metric for ACOs to consider. ACOs should have systems and tools in place to streamline the specialist consult referral process and increase care coordination.
The American Journal of Accountable Care. 2017;5(1):23-28
The Medicare accountable care organization (ACO) program rewards ACOs for providing high-quality healthcare. Participating providers are financially rewarded if healthcare spending is kept below targets set by Medicare, which can be achieved by preventing medical errors and duplication of services.1,2 The success of ACOs assumes that a structure of clinical integration and appropriately targeted incentives will improve coordination of care and quality.3 A key measure of quality of care is patient experience of care, including perceived coordination. Using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) family of survey instruments, ACOs are required to collect information on patient experiences, such as the ability to obtain timely care and access to specialists.4
Care coordination between primary and specialty care has received little attention despite the fact that specialty care accounts for more healthcare resource use than primary care, and specialists outnumber primary care physicians in the United States.5 Primary care and specialist providers report inadequate communication between each other about referrals, which compromises their ability to provide high-quality care6 and may also have a negative impact on self-reported patient satisfaction.
Patients report high levels of uncertainty and powerlessness during the period of time between a requested referral and subsequent action, as they wait for clarity on disease outcomes.7-9 Consequently, if patients experience inadequate coordination between primary and specialty care, their experiences may suffer. Previous research has found that shorter wait times for appointments are not always the most important priority for patients. Patients are willing to wait longer to get an appointment at a convenient time or to see a preferred provider, especially for low-worry, long-standing conditions. Yet, when there is a new health concern, faster access becomes a higher priority.10-12
The Veterans Health Administration (VHA)—the largest ACO-like clinically integrated system in the United States coordinating primary and specialty care—is a source of data that can provide important insights into the effect of the consult process on patient experience. In 2014, the VHA had over 9 million enrollees and provided or coordinated over 92 million outpatient encounters.13 Since 2009, the VHA has consistently measured patient experiences and satisfaction with the Survey of Healthcare Experiences of Patients (SHEP), which is modeled after the CAHPS survey that ACOs use to measure patient experiences.4,14 This paper investigates the effect of several different measures of VHA consult wait times on self-reported patient satisfaction, an effect that offers ACOs a specific point of intervention for measurably improving patient experiences.
METHODS
As discussed in detail below, analyses used administrative data on wait times for consults in the VHA to predict self-reported patient satisfaction with care. Study approval for human subjects was obtained from the VA Boston Healthcare System Institutional Review Board.
Administrative Consult Wait Time Measures
The VHA’s electronic consult system was implemented in 1999, and its use is mandated for all consultation requests (according to internal VHA data). Data for this study were extracted from fiscal year (FY) 2012. The consult system automatically records time stamps when consult-related administrative events occur; these time stamps can be used to compute the number of elapsed days between consult creation and first action or scheduling. An additional measure captured total consult resolution time (Figure and Table 1): a consult is considered resolved when the appointment (if performed) has been completed and the report has been written and signed. Time stamps are also recorded when the consult is updated, discontinued, or returned to the sending service for clarification. When consults are returned for clarification of the request, the consult wait time clock is not reset; the time stamps run from consult creation through appointment scheduling and, eventually, consult completion. In contrast, discontinued consults stop the clock.
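To make these elapsed-day calculations concrete, the following minimal sketch derives them from event time stamps. It is illustrative only: the column names and sample records are hypothetical and do not reflect the actual VHA consult schema.

```python
import pandas as pd

# Hypothetical consult records; columns are illustrative, not the VHA schema.
consults = pd.DataFrame({
    "consult_id":   [1, 2],
    "created":      pd.to_datetime(["2012-01-03", "2012-01-05"]),
    "first_action": pd.to_datetime(["2012-01-04", "2012-01-05"]),
    "scheduled":    pd.to_datetime(["2012-01-10", "2012-01-12"]),
    "resolved":     pd.to_datetime(["2012-01-25", "2012-02-01"]),
})

# Elapsed days from consult creation to each downstream event; "resolved"
# stands in for appointment completion plus a written and signed report.
for event in ["first_action", "scheduled", "resolved"]:
    consults[f"days_to_{event}"] = (consults[event] - consults["created"]).dt.days
```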
Standardization of the consult system enables distinctions between documents used for traditional clinical consultation and those used for other administrative purposes, such as for requesting transportation.15 With this in mind, we narrowed the 2012 data to focus on consults for clinical services by excluding administrative consults and non-VHA care consults.
Following our previous work, wait times were weighted by national proportions based on FY2011 data. Weights were developed based on the frequency of different types of consult appointments. If a station did not have a consult request for every type of appointment in a month, the remaining appointment weights were adjusted so they would sum to 100.16,17 Wait time measures were used in 2 ways in statistical models: 1) as a continuous variable; and 2) categorized roughly into quartiles, with the lowest quartile used as the reference group.
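The renormalization and quartile steps can be sketched as follows; the appointment types and weight values below are invented for illustration and are not the FY2011 figures.

```python
import pandas as pd

# Hypothetical national appointment-type weights (stand-ins for FY2011 data).
national_weights = {"cardiology": 40.0, "orthopedics": 35.0, "dermatology": 25.0}

# Suppose a station had no dermatology consult requests in a given month:
observed_waits = {"cardiology": 12.3, "orthopedics": 28.9}  # type -> mean wait (days)

# Renormalize the remaining weights so they sum to 100, then form the
# weighted station-level wait time.
remaining = {t: w for t, w in national_weights.items() if t in observed_waits}
scale = 100.0 / sum(remaining.values())
weighted_wait = sum(observed_waits[t] * w * scale for t, w in remaining.items()) / 100.0

# Categorize facility-level waits into rough quartiles, with the lowest
# quartile (q1) serving as the reference group in the models.
facility_waits = pd.Series([18.0, 24.5, weighted_wait, 31.2, 40.8])
wait_quartile = pd.qcut(facility_waits, 4, labels=["q1", "q2", "q3", "q4"])
```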
Sample Selection
Consult wait times were linked to self-reported satisfaction using the 2012 SHEP. Managed by the VHA Office of Analytics and Business Intelligence, SHEP is an ongoing nationwide survey that seeks to obtain patient feedback on recent episodes of VHA inpatient or outpatient care. For outpatient care, a simple random sample of patients with completed appointments at VHA facilities is selected each month (according to internal VHA Support Service Center data). The overall response rate was 53%, and respondents came from all VHA medical centers (n = 130).
Different sample selection rules were applied to each consult wait measure. First, all individuals whose SHEP visit date matched the date of a completed/updated consult were flagged; we also required the station (medical center code) in SHEP to match the station of the completed consult. This was the sample for the completed consult wait measure (n = 28,328). The wait computed was the facility average for all consults resolved in that month. Not surprisingly, because a clinic visit makes a patient eligible to be contacted for SHEP, 90% of the individuals in this sample had a completed/updated consult status rather than a discontinued or canceled status.
For the next 2 measures—days to first action and days to scheduled consult—all individuals whose SHEP visit date matched the date a consult was initiated were flagged (n = 44,387). These requests were linked to the receiving (not the sending) station because receiving stations actually do the scheduling. We computed a forward-looking facility average wait time for all consults requested at that receiving station in the month.
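A minimal sketch of these two linkage rules, assuming hypothetical table layouts for the SHEP and consult records:

```python
import pandas as pd

# Hypothetical SHEP respondents and consult records (column names invented).
shep = pd.DataFrame({
    "respondent": [101, 102],
    "station":    ["A", "B"],
    "visit_date": pd.to_datetime(["2012-03-07", "2012-03-09"]),
})
consults = pd.DataFrame({
    "station":  ["A", "B"],  # receiving station, which does the scheduling
    "created":  pd.to_datetime(["2012-03-07", "2012-03-01"]),
    "resolved": pd.to_datetime(["2012-03-20", "2012-03-09"]),
    "status":   ["completed", "updated"],
    "days_to_resolved": [13, 8],
})

# Completed-consult sample: SHEP visit date and station must match a
# completed/updated consult.
completed = shep.merge(
    consults[consults["status"].isin(["completed", "updated"])],
    left_on=["station", "visit_date"], right_on=["station", "resolved"],
)

# Initiated-consult sample (days to first action / days to scheduled):
# SHEP visit date must match the consult creation date at the receiving station.
initiated = shep.merge(consults, left_on=["station", "visit_date"],
                       right_on=["station", "created"])

# Facility-month average wait, attached to each linked respondent downstream.
consults["month"] = consults["resolved"].dt.to_period("M")
facility_avg = consults.groupby(["station", "month"])["days_to_resolved"].mean()
```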
Patient Satisfaction Dependent Variables
Satisfaction measures were selected and operationalized following previous work.18 Satisfaction with timeliness of care was measured by asking respondents how often they were able to get VHA appointments as soon as they thought they needed care, excluding times they needed urgent care. Access to VHA tests, treatments, and appointments with VHA specialists was measured by asking how easy it was to get this care in the last 12 months. Response options for these 3 measures were "always," "usually," "sometimes," and "never"; we estimated the likelihood of answering always/usually compared with sometimes/never. We also examined more general satisfaction measures that wait times for consults may influence. General satisfaction was measured by asking respondents to rate VHA healthcare in the last 12 months on a scale of 0 to 10 and to rate satisfaction with their most recent VHA visit on a Likert scale ranging from 1 to 7 (higher numbers indicate greater satisfaction). We estimated the likelihood of a 9 or 10 rating compared with less than 9 on the first measure, and the likelihood of a 6 or 7 compared with less than 6 on the second measure.
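The dichotomization of the outcomes can be expressed in a few lines; all variable names are hypothetical.

```python
import pandas as pd

# Hypothetical SHEP responses illustrating the cut points described above.
shep = pd.DataFrame({
    "timely":       ["always", "sometimes", "usually"],
    "vha_rating":   [10, 8, 9],   # 0-10 rating of VHA care in last 12 months
    "visit_rating": [7, 5, 6],    # 1-7 rating of the most recent VHA visit
})

shep["timely_sat"]  = shep["timely"].isin(["always", "usually"]).astype(int)
shep["overall_sat"] = (shep["vha_rating"] >= 9).astype(int)    # 9-10 vs <9
shep["visit_sat"]   = (shep["visit_rating"] >= 6).astype(int)  # 6-7 vs <6
```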
Risk Adjustors
Risk adjustors included age, gender, race/ethnicity, education level, number of visits to a doctor’s office in the last 12 months, and self-reported health status—all obtained from SHEP FY2012. Models also included month fixed effects to control for secular changes in wait times and a VHA medical center random effect to control for facility quality and case-mix differences.
Analyses
Stata version 10.0 (StataCorp, College Station, Texas) was used to estimate logistic regression models that predicted the dichotomized patient satisfaction variables.
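The analyses were run in Stata, but the model structure, a logistic regression with month fixed effects and a medical center (station) random intercept, can be sketched in Python as an illustrative analogue using simulated data:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated data standing in for the analytic file; not the study data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "satisfied":     rng.integers(0, 2, n),
    "wait_quartile": rng.choice(["q1", "q2", "q3", "q4"], n),
    "age":           rng.integers(45, 90, n),
    "month":         rng.integers(1, 13, n),
    "station":       rng.choice([f"s{i}" for i in range(20)], n),
})

# Logistic model with month fixed effects; q1 (lowest wait quartile) is the
# reference category. The station random intercept enters as a variance
# component, approximating the paper's medical center random effect.
model = BinomialBayesMixedGLM.from_formula(
    "satisfied ~ C(wait_quartile) + age + C(month)",
    vc_formulas={"station": "0 + C(station)"},
    data=df,
)
result = model.fit_vb()  # approximate (variational Bayes) fit
print(result.summary())
```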
RESULTS
The SHEP respondents selected for this sample reflect the larger VHA patient population. Respondents were predominantly male, in poor health, and frequent healthcare users. Satisfaction with VHA care was high: nearly 80% of respondents reported they usually or always received appointments, treatments, or specialist care in a timely fashion. Additionally, 81% expressed the top 2 satisfaction levels for the most recent visit, and 58% expressed the highest satisfaction levels with VHA care in the last 12 months (Table 2).
There was significant variation between facilities in consult wait times. Facilities in the top quartile had waits for consult completion that were more than 10 days longer than those in the lowest quartile (33.5 days vs 23 days). Facilities in the highest quartile took about a week longer to schedule appointments in response to a consult request than facilities in the lowest quartile. There was very little variation in time to first action, with less than a half-day difference between the highest and lowest quartiles (Table 3).
The measures for completed consult and time to scheduled consult have strong and consistent relationships with patient satisfaction (Table 4). Generally, there was a linear relationship, with satisfaction decreasing for patients who visited facilities with longer waits (the higher quartiles of wait times). Patients who visited facilities in the highest quartiles of wait times were significantly less satisfied than patients who visited facilities in the lowest quartile for these 2 measures. Sensitivity analyses that included wait times as a continuous measure found that longer waits were significantly associated with lower satisfaction for every outcome except the model using the completed consult wait to predict the overall VHA satisfaction measure (data not shown). There was no significant relationship between the days to first action measure and patient satisfaction.
DISCUSSION
This study finds a consistent relationship between measures of consult wait times and patient satisfaction. Longer waits between the initial request and either the scheduling or the completion of the consult are associated with poorer satisfaction. Generally, the negative relationship between waits and satisfaction was stronger for satisfaction measures specifically related to accessing care, treatments, or specialists than for more general satisfaction measures. There was no relationship between time to first action and patient satisfaction.
These findings are consistent with previous research that validated access metrics for appointments using patient satisfaction as an outcome. Prentice et al (2014) found that different types of access metrics predicted patient satisfaction for new and returning patients.18 Specifically, the wait time between appointment creation and appointment completion strongly predicted patient satisfaction for new, but not established, patients. One reason this relationship holds for new patients but not established ones may be that new patients typically want to be seen right away because of emerging health concerns. That previously validated measure is consistent with the time to scheduling and time to completion measures used here, which show the same relationship between consult waits and patient satisfaction. Patients being referred to specialty care are likely to have new problems and to want to be seen as soon as possible after the need is identified.
The findings from this study also expand our understanding of administrative access metrics that are patient centered. In contrast to the days to scheduled consult and days to completed consult measures, the days to first action measure had no relationship with patient satisfaction. This metric largely captures "behind the scenes" processes of transferring and scheduling consults. Patients repeatedly report a sense of powerlessness and uncertainty, as well as a feeling of "living their life on hold," when waiting for diagnosis or treatment, and this is compounded by a lack of information from the healthcare system.7-9 As ACOs and the VHA put greater emphasis on the experiences of patients, these findings suggest that metrics should focus on tangible processes that patients readily understand as action being taken on their behalf, such as scheduling appointments.
Given that longer waits for consults negatively impact patient experience, ACOs should have systems and tools in place to streamline the consult referral process and increase care coordination. These include policy, training, appropriate technological support, and organizational culture. Greenberg et al (2014) argue for the importance of developing a collaborative care agreement that delineates the roles of the referring and consulting providers in the preconsult exchange, the actual consultation, and any co-management of the patient required after the consult is completed.5 Others argue that ACOs need to focus specifically on improving the structure of care coordination. This includes training all providers and care teams in best practices for care coordination; technological support tools that help providers see a complete picture of a patient's care and achieve care coordination goals; and a culture that prioritizes such coordination, including protected time during the workday for care coordination activities.3,19
Limitations
The main limitation of this study is that we cannot definitively state that the relationship between longer consult wait times and lower patient satisfaction is causal, because omitted variables may be responsible. Our models attempted to minimize this possibility by including month fixed effects to control for secular changes in wait times and a medical center random effect to control for facility quality and case-mix differences. Moreover, research has repeatedly found a relationship between longer wait times and poorer outcomes, including decreased patient satisfaction and poorer health outcomes, in a variety of veteran populations and time periods, strengthening the inference that the relationship may be causal.16-18,20-22
Another limitation is that the study sample is largely elderly and male, so results may not be generalizable to other patient populations. Finally, wait times for all types of consults may not have the same impact on patient satisfaction. Due to data availability at the time of the study, our measures included all clinical consults. Long waits for a recommended preventive screening (eg, colonoscopy) may not have the same effect on patient satisfaction as waits for consults that result from new health concerns. Future work should consider these nuances to determine which types of consult access have the largest impact on patient experiences.
CONCLUSIONS
The consult process occurs at an anxiety-producing time for patients. Findings from this study suggest that certain types of consult waits that can be easily obtained from the scheduling system are strong predictors of patient satisfaction. As ACOs reorganize to become more patient-centered, better management of consult waits has the potential to significantly improve patient satisfaction.
Acknowledgments
The authors are indebted to Aaron Legler for programming support.
Author Affiliations: Partnered Evidence-based Policy Resource Center (SDP, JCP), VA Boston Healthcare System, Boston, MA, and Office of Veterans Access to Care (MLD), Department of Veterans Affairs, Washington, DC; Department of Pharmacy and Health System Sciences, Northeastern University (SDP), Boston, MA; School of Medicine and Public Health, Boston University (JCP), Boston, MA.
Source of Funding: Funding for this research was provided by the Access Clinic and Administration Program (now the Office of Veterans Access to Care) in the Department of Veterans Affairs.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (SDP, MLD, JCP); acquisition of data (SDP, MLD, JCP); analysis and interpretation of data (SDP, JCP); drafting of the manuscript (MLD, JCP); critical revision of the manuscript for important intellectual content (SDP, MLD); statistical analysis (SDP); obtaining funding (MLD); administrative, technical, or logistic support (SDP, MLD); and supervision (SDP, JCP).
Send Correspondence to: Julia C. Prentice, PhD, VA Boston Healthcare System, 150 South Huntington Ave, Mail Stop 152H, Boston, MA 02130. E-mail: jprentic@bu.edu.
REFERENCES
1. Accountable care organizations (ACOs): general information. CMS website. https://innovation.cms.gov/initiatives/aco/. Updated December 15, 2016. Accessed December 9, 2015.
2. McWilliams JM, Landon BE, Chernew ME, Zaslavsky AM. Changes in patients’ experiences in Medicare accountable care organizations. N Engl J Med. 2014;371(18):1715-1724. doi: 10.1056/NEJMsa1406552.
3. Press MJ, Michelow MD, MacPhail LH. Care coordination in accountable care organizations: moving beyond structure and incentives. Am J Manag Care. 2012;18(12):778-780.
4. CAHPS Survey for Accountable Care Organizations (ACOs) Participating in Medicare Initiatives—2016 ACO-9 Survey version required (English). CMS website. http://acocahps.cms.gov/globalassets/aco---epi-2-new-site/pdfs-for-aco/survey-instruments/2016-aco-survey/english/2016_aco-9_mail_survey_english.pdf. Accessed December 9, 2015.
5. Greenberg JO, Barnett ML, Spinks MA, Dudley MJ, Frolkis JP. The “medical neighborhood”: integrating primary and specialty care for ambulatory patients. JAMA Intern Med. 2014;174(3):454-457. doi: 10.1001/jamainternmed.2013.14093.
6. O’Malley AS, Reschovsky JD. Referral and consultation communication between primary care and specialist physicians: finding common ground. Arch Intern Med. 2011;171(1):56-65. doi: 10.1001/archinternmed.2010.480.
7. Fogarty C, Cronin P. Waiting for healthcare: a concept analysis. J Adv Nurs. 2007;61(4):463-471. doi: 10.1111/j.1365-2648.2007.04507.x.
8. Hansen BS, Rørtveit K, Leiknes I, et al. Patient experiences of uncertainty—a synthesis to guide nursing practice and research. J Nurs Manag. 2012;20(2):266-277. doi: 10.1111/j.1365-2834.2011.01369.x.
9. Rittenmeyer L, Huffman D, Godfrey C. The experience of patients, families, and/or significant others of waiting when engaging with the healthcare system: a systematic qualitative review. JBI Database System Rev Implement Rep. 2014;12(8):198-258. doi: 10.11124/jbisrir-2014-1664.
10. Rubin G, Bate A, George A, Shackley P, Hall N. Preferences for access to the GP: a discrete choice experiment. Br J Gen Pract. 2006;56(531):743-748.
11. Gerard K, Salisbury C, Street D, Pope C, Baxter H. Is fast access to general practice all that should matter? a discrete choice experiment of patient preferences. J Health Serv Res Policy. 2008;13(suppl 2):3-10. doi: 10.1258/jhsrp.2007.007087.
12. Salisbury C, Goodall S, Montgomery AA, et al. Does advanced access improve access to primary health care? questionnaire survey of patients. Br J Gen Pract. 2007;57(541):615-621.
13. National Center for Veterans Analysis and Statistics. Selected Veterans Health Administration characteristics: FY2002 to FY2014. Veterans Affairs website. http://www.va.gov/vetdata/Utilization.asp. Published September 28, 2014. Accessed December 9, 2015.
14. VHA facility quality and safety report fiscal year 2012 data. Veterans Affairs website. www.va.gov/health/docs/vha_quality_and_safety_report_2013.pdf. Published December 2013. Accessed December 15, 2015.
15. Waiting for care: examining patient wait times at VA. U.S. House of Representatives Committee on Veterans Affairs hearing. House Committee on Veterans Affairs website. https://veterans.house.gov/sites/republicans.veterans.house.gov/files/documents/113-11_0.pdf. Published March 14, 2013. Accessed July 10, 2015.
16. Prentice J, Pizer SD. Delayed access to health care and mortality. Health Serv Res. 2007;42(2):644-662.
17. Prentice JC, Pizer SD. Waiting times and hospitalizations for ambulatory care sensitive conditions. Health Serv Outcomes Res Methodol. 2008;8(1):1-18. doi: 10.1007/s10742-007-0024-5.
18. Prentice JC, Davies ML, Pizer SD. Which outpatient wait-time measures are related to patient satisfaction? Am J Med Qual. 2014;29(3):227-235. doi: 10.1177/1062860613494750.
19. Oliver P, Bacheller S. ACOs: what every care coordinator needs in their tool box. Am J Accountable Care. 2015;3(3):18-20.
20. Pizer SD, Prentice JC. What are the consequences of waiting for health care in the veteran population? J Gen Intern Med. 2011;26(suppl 2):676-682. doi: 10.1007/s11606-011-1819-1.
21. Prentice JC, Fincke BG, Miller DR, Pizer SD. Outpatient wait time and diabetes care quality improvement. Am J Manag Care. 2011;17(2):e43-e54.
22. Prentice JC, Dy S, Davies ML, Pizer SD. Using health outcomes to validate access quality measures. Am J Manag Care. 2013;19(11):e367-e377.