Population Health, Equity & Outcomes
Implementing a policy change to require preappointment surveys before scheduling initial clinic evaluations can improve wait-list times and show rates.
ABSTRACT
Objectives: Previous research has demonstrated that having patients complete an optional preappointment survey can increase their likelihood of attending their appointment. However, there is no literature examining how requiring preappointment engagement affects outcomes. The current study aimed to investigate the impact of mandatory preappointment surveys on patient show rates and wait-list times and provide guidance for implementing data-driven policy change.
Study Design: This study examined show rates and wait-list times during the 1 year before and 1 year following a policy change requiring new patients to complete preappointment surveys before they are scheduled. The χ2 test of homogeneity was used to determine changes between pre– and post–policy change show rates, and an independent t test was used to examine changes in wait-list time.
Methods: This study examined the medical records of 275 youth with intake appointments at an interdisciplinary chronic pain management clinic at a large hospital. A retrospective chart review was conducted to determine changes in patient show rates and wait-list times.
Results: Findings demonstrated that patient show rates increased from 78.8% to 86.1% after the policy change, and average wait-list time decreased by 55.2% from the year before the policy change.
Conclusions: This study’s findings provide evidence that requiring patients to complete a preappointment survey before being scheduled improved show rates and significantly decreased wait-list times in a pediatric pain clinic. Providers should balance benefits with potential limitations, such as restricting access to care, when implementing such a policy change. This study also offers practical guidance for implementing data-driven policy change in health care settings.
Am J Manag Care. 2025;31(Spec. No. 3):SP120-SP126. https://doi.org/10.37765/ajmc.2025.89705
Patients who miss their scheduled clinic appointments, also known as no-shows, can create problems for medical providers across different specialties. When patients fail to attend their appointments, it can negatively affect clinical productivity, billable hours, revenue generation, physician-patient relationships, wait-list times, and patient satisfaction.1,2 Various strategies have been proposed to address the issue of patient no-shows and lengthy wait-lists. Clinics and practices often use different forms of appointment reminders, such as letters, phone calls, SMS text reminders, and emails.3-5 These reminder systems have been effective; however, improvements in show rates have not always been sustained.6
A recent study investigated the use of an optional preappointment survey to increase the show rate of patients attending their intake at an outpatient pain clinic.7 This study utilized the psychological principle of the foot-in-the-door technique,8 which suggests that if patients comply with a small request (eg, completing the preappointment survey), they are more likely to comply with a larger demand (eg, attending their appointment). The researchers compared show rate data from the year before the clinic implemented the preappointment survey with data from the year after. Although there was only a small increase in the overall show rate (from 73.7% to 75.6%), a closer analysis revealed that 97.2% of patients who completed the preappointment survey attended their appointments, whereas only 36.2% of patients who did not complete the survey attended.
These results indicate that an optional preappointment survey is not sufficient to improve overall clinic show rates, because the overall rate still includes patients who did not complete the survey and were therefore unlikely to attend. Instead, policy may need to capitalize on the high attendance rates of patients who complete preappointment surveys by requiring new patients to complete the survey before being scheduled.
Purpose/Aims
The current study is the first to our knowledge to evaluate how implementing mandatory preappointment surveys may impact clinic utilization and access to care. We aimed to understand how this policy change can affect show rates and access to care for youth being evaluated in a chronic pain management clinic. We hypothesized that requiring completion of preappointment surveys before scheduling would (1) increase patient show rates and (2) decrease patient wait-list times.
METHODS
Settings
This research was conducted at an interdisciplinary pediatric chronic pain management clinic at Johns Hopkins All Children’s Hospital in St Petersburg, Florida. The clinic schedules 3 to 5 weekly slots for new patients, allocating 2 hours for each appointment. Patients are usually referred to this clinic for concerns related to chronic pain lasting longer than 3 months, including diagnoses such as juvenile fibromyalgia, amplified musculoskeletal pain, chronic headaches, and other widespread pain concerns. During the initial evaluation, a multidisciplinary team consisting of an anesthesiologist, physical therapist, and psychologist conducts a comprehensive assessment of the patient’s pain history, psychosocial functioning, and daily life limitations. The team also provides personalized education and treatment recommendations. Ethical approval for this research was obtained from the Johns Hopkins All Children’s Hospital Institutional Review Board.
Participants
Participants were patients scheduled for an intake evaluation at the interdisciplinary chronic pain management clinic. The researchers examined patient show rates and wait-list times in the year before and the year after implementing a policy change that required patients to complete preappointment surveys in order to be scheduled. Patients who did not speak English (n = 8), who were from international or out-of-state locations (n = 6), or whose sessions were canceled by providers or administration (n = 4) were excluded from this study.
Measures
Show rate. Show rate was defined as the percentage of patients who attended their scheduled intake appointments. If patients missed their scheduled appointments, this was considered a no-show. Same-day cancellations were also recorded as no-shows because they prevented providers from seeing other patients during that scheduled time.
Wait-list. The wait-list refers to the length of time patients waited for their clinic appointments. This variable was calculated by counting the days between when the patient was contacted to schedule their initial appointment and the actual appointment date.
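As a concrete illustration, the wait-list variable is a simple difference between two dates; the specific dates below are hypothetical and chosen only to demonstrate the calculation.

```python
from datetime import date

# Hypothetical dates for illustration only
contact_date = date(2021, 3, 1)       # family contacted to schedule
appointment_date = date(2021, 5, 10)  # initial appointment date

# Wait-list time is the count of days between the two dates
wait_days = (appointment_date - contact_date).days
print(wait_days)  # 70
```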
Preappointment survey. The preappointment survey comprised approximately 40 questions, encompassing open-ended and closed-ended questions for patients and their caregivers. The questions covered topics such as pain history, medical background, the impact of pain on daily life, beliefs about pain, and mood. The survey was hosted on the Qualtrics platform and could be completed via smartphone or computer. Data from the survey platform indicated that participants took a median time of 26 minutes (range, 7-71) to complete the survey.
Procedures
A retrospective chart review examined appointment attendance and wait-list time for each patient. We analyzed patients scheduled for the clinic 1 year before and 1 year after implementing the new policy that required patients to complete a preappointment survey prior to being scheduled for an initial clinic visit.
Once patients were referred to the pain clinic, a nurse coordinator from the clinic called the family to inform them of the referral. The coordinator obtained their email address and sent them a link for the preappointment survey. After the policy was implemented, new families were also told during this call that an appointment could not be scheduled until the survey was completed. The nurse coordinator monitored Qualtrics for new survey responses. Once a response was received, the referral was accepted into the organization’s electronic health record system, and scheduling instructions were entered. The referral was placed in a work queue, and patients were called to schedule their initial appointment. When families did not complete the survey, they received reminder phone calls at 2 weeks, 30 days, and 4 months. If there was no response after 12 months, the referral was sent to a deferred queue where it could be accessed and retrieved if the family called back.
Data Analysis
Demographic data were retrieved from the participants’ electronic health records. The data were described overall and by pre– and post–policy change period using mean and SD for age and counts and percentages for sex and race.
The first hypothesis was tested by comparing show rate data from the year before the mandatory preappointment survey policy change with show rate data from the year after the change. These categorical variables were tallied and represented as counts and percentages. The χ2 homogeneity test was used to determine whether there was a significant difference between the pre– and post–policy change show rates. Logistic regression analyses were conducted to examine the impact of the policy change on show rates while controlling for demographic factors such as sex and race. Analyses were performed in a sequential block format. Model 1 included only the policy change variable; model 2 added sex as a predictor; model 3 incorporated race as a categorical variable using dummy codes for Latino, Black, and Asian participants; and model 4 included interaction terms to assess potential interactions between race and sex. Regression coefficients were examined to identify significant predictors of show rates, and the overall model fit was evaluated using Nagelkerke R2, with Hosmer-Lemeshow tests to assess model adequacy.
The second hypothesis was tested by comparing wait-list times during the year before the mandatory preappointment survey policy change with wait-list times during the year after the change. The continuous variables were summarized as means and SEs. An independent samples t test with a bias-corrected and accelerated (BCa) bootstrap from 1000 samples and the Levene test for equality of variance were used to determine whether there was a statistically significant difference in wait times between the pre– and post–policy change groups. Cohen d was calculated to determine effect size.
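A simplified version of this bootstrap comparison can be sketched as follows. The wait times are simulated from the group means and SDs implied by the summary statistics reported in this article, and for brevity the sketch uses a percentile bootstrap rather than the BCa correction the authors applied.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated wait times (days), roughly matching the reported group
# means and SDs; a normal approximation suffices for illustration,
# even though real wait times cannot be negative
pre = rng.normal(70.0, 43.8, 160)    # pre-policy change group
post = rng.normal(30.2, 19.8, 115)   # post-policy change group

observed_diff = pre.mean() - post.mean()

# Percentile bootstrap CI for the difference in means (1000 resamples)
boot_diffs = np.array([
    rng.choice(pre, pre.size).mean() - rng.choice(post, post.size).mean()
    for _ in range(1000)
])
ci_low, ci_high = np.percentile(boot_diffs, [2.5, 97.5])
```

A CI that excludes 0 indicates a significant difference in mean wait time between the two periods.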
Statistical tests were 2-sided with a significance level of 0.05. All statistical analyses were conducted with IBM SPSS Statistics 28 (IBM).
RESULTS
The records of 275 patients were reviewed to assess the patients’ characteristics before and after the policy change (Table 1). There were 168 patient slots in the year before the policy change and 125 patient slots in the year after; however, 8 and 10 patients, respectively, were excluded based on our exclusion criteria. All appointments were attended by both patients and caregivers. The mean (SD) age of the patients was 14.7 (2.8) years; they ranged in age from 5 to 21 years. Most patients were female (76.4%) and White (79.3%). There were no statistically significant differences in age, sex, or race distributions between the pre– and post–policy change periods.
Show Rate
The overall patient show rate increased from 78.8% (126/160) before the policy change to 86.1% (99/115) after the policy change. These data indicate that 34 patient no-shows (21.3%) occurred before the policy change and 16 patient no-shows (13.9%) occurred afterward. A χ2 homogeneity test was conducted to examine the difference in show rates between the 2 study periods. The proportions did not significantly differ based on the implementation of the policy (n = 275; χ2(1) = 2.05; P = .15).
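This comparison can be reproduced directly from the reported counts (126 of 160 shows before the change, 99 of 115 after). Note that the exact statistic depends on whether a continuity correction is applied, so the value below may differ slightly from the SPSS output reported in the text.

```python
from scipy.stats import chi2_contingency

# Rows: pre- and post-policy periods; columns: shows, no-shows
table = [[126, 34],   # 160 pre-policy intakes
         [99, 16]]    # 115 post-policy intakes

# Yates continuity correction is applied by default for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
```

With either correction choice, P exceeds .05, consistent with the nonsignificant result reported above.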
To further assess potential factors influencing no-shows, an independent samples t test was conducted to compare age before and after the policy change, and it revealed no significant differences (t(48) = 1.42; P = .16). Additionally, χ2 tests indicated no significant association between sex (n = 50; χ2(1) = 0.12; P = .49) or race (n = 50; χ2(1) = 5.87; P = .88) distributions and the policy change period (Table 2).
A logistic regression was conducted to examine the impact of the policy change on show rates while controlling for demographic factors. The first block of the logistic regression included only the policy change variable, and it demonstrated no significant change in show rate after the policy change (χ2(1) = 2.48; P = .12). In the second block, sex was added to the model. The omnibus test did not show a significant improvement in model fit (χ2(1) = 0.76; P = .38) and explained only 1.9% of variance (Nagelkerke R2 = 0.02).
Race was added to the model in the third block, resulting in the overall model becoming statistically significant (χ2(5) = 22.62; P < .001) and explaining 12.9% of the variance in show rates (Nagelkerke R2 = 0.13). The Hosmer-Lemeshow test demonstrated the model adequately fit the data (χ2(5) = 3.2; P = .67). In this model, Latino patients had significantly lower odds of attending their appointments compared with White patients, with an 88% reduction in odds (OR, 0.12; P < .001). Similarly, Black patients were 63% less likely to attend their appointments than White patients (OR, 0.37; P = .04). No significant differences related to sex were observed in this model. In the final model, interaction terms between race and sex were added, but they did not improve the model (χ2(3) = 3.1; P = .37). Although no significant interactions were found, significantly lower show rates remained for Latino patients (OR, 0.02; P = .02). Regression analyses are summarized in Table 3.
Wait-List Time
An independent samples t test with a BCa bootstrap from 1000 samples was run to compare the pre–policy change group vs the post–policy change group with appointment wait time (in days) as the dependent variable. The Levene test for equality of variance was significant; therefore, equal variances were not assumed. Participants in the pre–policy change group (n = 160) waited more days (mean [SE], 70.0 [3.46]) than those in the post–policy change group (n = 115; mean [SE], 30.2 [1.85]). This difference of 39.71 days was significant (BCa 95% CI, 32.02-47.30; t(236.04) = 10.13; P < .001) and represented a large effect (d = 1.11) and a 55.2% reduction in patients’ time spent on the wait-list.
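The reported test statistic, degrees of freedom, and effect size can be reproduced from the published summary statistics alone (small rounding differences aside), which may help readers verify or adapt the analysis:

```python
import math

# Published group sizes, means (days), and standard errors
n1, mean1, se1 = 160, 70.0, 3.46   # pre-policy change
n2, mean2, se2 = 115, 30.2, 1.85   # post-policy change

diff = mean1 - mean2                       # ~39.8 days

# Welch t statistic (equal variances not assumed)
t = diff / math.sqrt(se1**2 + se2**2)      # ~10.1

# Welch-Satterthwaite degrees of freedom
v1, v2 = se1**2, se2**2
dof = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # ~236

# Cohen d using the pooled SD (group SD recovered as SE * sqrt(n))
s1, s2 = se1 * math.sqrt(n1), se2 * math.sqrt(n2)
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = diff / pooled_sd                       # ~1.11
```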
DISCUSSION
This study examined the effects of implementing a new policy requiring patients to complete a preappointment survey before their initial appointment was scheduled at a chronic pain clinic. The impact of this policy change on patient show rates and wait-list times was examined. Although the overall show rate improved from approximately 79% to 86% following the policy change, this increase was not statistically significant, which may be attributable to a ceiling effect. Logistic regression analyses, which controlled for sex, race, and their interactions, demonstrated that Latino and Black patients had significantly lower odds of attending their appointments compared with White patients. These findings suggest that although the policy change may have contributed to an overall improvement in show rates, it did not equally benefit all demographic groups. Notably, there was a lower percentage of Latino and Black patients scheduled after the policy change, although this difference was not statistically significant. The regressions may have been influenced by the small sample sizes of Latino and Black patients, which limits the interpretation of these findings. Although it is unclear why fewer Latino and Black patients attended their appointments after the policy change, these results highlight the need for providers to be aware of potential disparities and be flexible in their policies to ensure that policy changes do not disproportionately affect racial minority groups by inadvertently creating barriers to care.
The study also found that the policy changes significantly decreased the amount of time patients spent on the wait-list before being seen, which is a meaningful finding because access to care and decreased wait-list time are important metrics for providers and institutions. Previous research has indicated that long wait-lists can lead to missed appointments caused by conflicting obligations, forgetting the appointment date, or changes in a patient’s clinical status.9,10 By reducing wait times, these obstacles may be mitigated. Additionally, findings from the current study suggest that the policy change may have helped prevent ambivalent families, who were less likely to attend their appointments, from being scheduled. This may have resulted in fewer empty patient slots and reduced the wait-list for more motivated families. Future research is needed to determine specific reasons patients do not attend their appointments. Previous studies have identified common reasons for no-shows such as forgetfulness, sudden illness, work or school conflicts, transportation issues, no longer needing an appointment, or patients seeking services elsewhere.9,10 Providers should consider these factors when interpreting this study’s results or assessing potential changes in their own practices.
Limitations
The study was conducted at a pediatric pain clinic and focused only on new patient appointments. The results may vary in other situations, such as different clinical disciplines or settings, or if applied to follow-up appointments. Although assessing the impact of a preappointment survey in other settings may help generalize the study’s findings, providers should use caution when applying this strategy. A mandatory preappointment survey would be inappropriate in emergencies or for patients requiring frequent follow-up appointments.
The results of this study should be interpreted with caution due to the unequal distributions of racial groups and sex within this sample. Patients attending pediatric chronic pain clinics, such as the clinic in this study, are most often White female adolescents. In this sample, the vast majority of patients were White (n = 218), whereas there were fewer than 30 patients represented in each of the other racial groups. This disparity may introduce bias in the regression analyses, potentially affecting the interpretation of race-related findings or masking important differences in show rates among smaller demographic groups. Future research should aim to explore the effects of policy change on specific racial groups, ensuring adequate sample sizes to minimize bias and accurately capture how they are affected.
The timing of this study should also be considered when interpreting results. The attendance rate during the study period (December 2020-December 2022) may have been affected by the COVID-19 pandemic. Families’ concerns about COVID-19 infection may have potentially affected their willingness to attend a multidisciplinary appointment. Although COVID-19 may act as a temporal confounder, the attendance rates in the first year of the current study were consistent with what we observed in a previous study7 conducted in this clinic before the COVID-19 pandemic. Furthermore, no families reported that attendance or survey completion was impacted by COVID-19–related concerns.
Additionally, the research team investigated why the total number of patients seen during the year after the policy change was substantially lower than in the year before the policy change was implemented. It was discovered that in the year after the policy change, there were fewer overall new patient clinic slots available because of changes in provider availability. This information may reinforce the significance of our findings: with fewer available appointments in the year after the policy change, wait-list times would have been expected to increase.
When contemplating policy change, providers must consider how inflexibility may jeopardize equity in health care. Some patients may be too busy to complete a survey, not have internet access to complete the survey, feel uncomfortable completing online questions, or have low interest in this task. In fact, these concerns may have prevented some patients from being scheduled in the current study, although we were unable to assess this supposition. If providers plan to require mandatory surveys, they need to first communicate with patients to determine barriers and be open to making exceptions so these concerns do not limit access to care. The current study could not determine factors associated with noncompletion of the required preappointment survey. Anecdotally, in this pain clinic, noncompletion of the survey sometimes reflects families’ lack of willingness to follow through with prolonged, difficult, or time-consuming treatment recommendations that are inherent to chronic pain management. However, further research is needed to identify other barriers that preclude the completion of a preappointment survey.
Implications
The study results show that requiring preappointment surveys can reduce wait times in a pediatric pain clinic, leading to better access to care and more efficient use of clinical resources. When possible, conducting a trial study can help determine whether similar methods or policy changes will yield comparable results in other clinical scenarios. Providers can then collaborate with their organization’s administration to consider requiring preappointment surveys before scheduling appointments in their clinics. Providers should be adaptable and make exceptions when implementing policies to ensure they do not create barriers to accessing care. For example, online surveys may pose challenges for individuals without internet access at home, non-English speakers, or individuals with disabilities such as blindness. Providers should work with these patients and their caregivers to ensure that the survey requirement does not hinder their access to medical care. Providers should also consider whether the survey questions could have legal implications. For instance, they may choose to exclude questions about suicide, self-harm, or illegal activities, or carefully consider how to handle responses to these questions to meet legal requirements and ethical obligations in their field.
CONCLUSIONS
Preappointment surveys can positively impact new patient appointment show rates and wait-list times, especially when required before scheduling. Providers may want to consider requiring preappointment surveys to capitalize on the higher attendance of patients who complete them, thereby improving show rates and wait-list times. If considering this policy change, providers should carefully consider how it may affect the populations they serve to ensure access to care is not limited. Even if a general policy change is made to require preappointment surveys, providers should consider exceptions and how to assist patients who cannot reasonably complete surveys. When discussing new policies with organizational administration, it is important to present data supporting the policy change and consider starting with a trial period to support these initiatives. Continued data monitoring and a willingness to adapt procedures are necessary to meet the ever-changing needs of patients.
Author Affiliations: Johns Hopkins All Children’s Hospital (WSF, EF, JTR, KH, GC), St Petersburg, FL.
Source of Funding: None.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (WSF, KH, GC); acquisition of data (WSF, KH, GC); analysis and interpretation of data (WSF, EF, JTR, KH, GC); drafting of the manuscript (WSF, EF, GC); critical revision of the manuscript for important intellectual content (WSF, EF, JTR, KH, GC); statistical analysis (JTR); provision of study materials or patients (WSF, KH); administrative, technical, or logistic support (WSF, KH); and supervision (WSF).
Send Correspondence to: William S. Frye, PhD, BCB, ABPP, Johns Hopkins All Children’s Hospital, 880 6th St S, Ste 460, St Petersburg, FL 33701. Email: Wfrye1@jhmi.edu.
REFERENCES