The authors developed and validated a survey instrument to assess primary care providers’ and pharmacists’ experiences, attitudes, and beliefs regarding medication discontinuation.
ABSTRACT
Objectives: Primary care providers (PCPs) and clinical pharmacists have concerns about the adverse consequences of using medications inappropriately and generally support the notion of reducing unnecessary drugs. Despite this attitude, many factors impede clinicians’ ability to discontinue medication in clinical settings. We sought to develop a survey instrument that assesses PCPs’ and pharmacists’ experiences, attitudes, and beliefs toward medication discontinuation.
Study Design: Survey development and psychometric assessment.
Methods: Based on a conceptual framework, we developed a questionnaire and surveyed a national sample of Department of Veterans Affairs PCPs with prescribing privileges, including physicians, nurse practitioners, physician assistants, and clinical pharmacy specialists. We randomly divided respondents into derivation and validation samples and used iterations of multi-trait analysis to assess the psychometric properties of the proposed measures. Multivariable regression models identified factors associated with the outcome of self-rated comfort with medication discontinuation.
Results: Using established criteria for scale development, we identified 5 scales: Medication Characteristics, Current Patient Clinical Factors, Predictions of Future Health States, Patients’ Resources to Manage Their Own Health, and Education and Experience. Three of these dimensions predicted providers’ self-rated comfort with making decisions to discontinue medication (Current Patient Clinical Factors, Predictions of Future Health States, and Education and Experience).
Conclusions: We developed a psychometrically sound instrument to measure prescribers’ attitudes toward, and experiences with, medication discontinuation. This survey will enable identification of perceived barriers to, and facilitators of, proactive discontinuation—an important step toward developing interventions that improve the quality and safety of care in medication use.
Am J Manag Care. 2016;22(11):747-754
Use of 5 or more medicines, often considered polypharmacy, is associated with adverse drug events (ADEs).1 ADEs, in turn, lead to increased healthcare utilization, costs, and morbidity.2,3 Approximately 40% of adults 65 years or older are exposed to polypharmacy, with similar prevalence in Department of Veterans Affairs (VA) patients.4,5 One approach to prevent polypharmacy-related ADEs is to reduce medications that are outdated, not indicated, or of limited benefit relative to risk.6
Discontinuation, also known as de-prescribing, has been defined as a “systematic process of identifying and discontinuing drugs in instances in which existing or potential harms outweigh existing or potential benefits within the context of an individual patient’s care goals, current level of functioning, life expectancy, values, and preferences. De-prescribing is part of the good prescribing continuum.”7 Although discontinuing a medication can be considered “doing less,” it often requires more provider effort, as patients may require more frequent visits and closer monitoring after de-prescribing. Moreover, discussions about medication discontinuation may take time during already busy clinical encounters, especially to ensure accurate communication between patients and clinicians.8
Prescribers voice concerns about the adverse consequences of inappropriate medication use and express general support for discontinuing unnecessary drugs.9 Nonetheless, many factors impede clinicians’ ability to de-prescribe, including patient complexity, clinical uncertainty, and shared management with other healthcare providers, all of which can contribute to “clinical inertia” around medication discontinuation.10 Considering these findings in the context of the broader literature on de-implementation, we undertook the present study to develop and administer a survey to a national sample of VA primary care providers (PCPs) to characterize their experiences, attitudes, and beliefs toward medication discontinuation.
METHODS
Instrument Development
Based on our qualitative work with PCPs assessing their attitudes toward discontinuation and on the literature on medication prescribing during the past 30 years, we developed a conceptual model of factors that influence medication discontinuation decisions.9 The hierarchical model identified 4 larger, overarching domains: Medications, Patients, Providers, and System Factors. Within these broad domains, we distinguished 10 more specific dimensions (Table 1). Each dimension represents a construct, or abstract idea, that we sought to measure in the survey instrument.11 The Medications domain includes 2 dimensions: “medication characteristics,” such as dosing frequency, and “medication uncertainty,” which includes medication reconciliation difficulties. The Patients domain comprises 4 dimensions: “clinical status” includes the patient’s current health conditions; “desired role” reflects patient activation and shared decision making; “adherence” refers to taking medication as prescribed; and “patient knowledge and beliefs about medications.” The Providers domain has 2 dimensions: “providers’ personal beliefs” reflects respondents’ views about medications, and “professional identity” encompasses responsibility, jurisdiction, and authority regarding medications. The System Factors domain has 2 dimensions: “multiple providers” addresses complexities of care, and “workplace structure and process” refers to external factors.
Based on these 10 dimensions, we generated a pool of 75 items, ensuring that each dimension was represented by at least 3 items. An additional item asked providers to indicate their current overall comfort level with deciding to discontinue a medication on a 0 to 10 scale ranging from “not at all comfortable” to “completely comfortable.” To assess providers’ general attitudes regarding the use and efficacy of medications, we included the 4-item Beliefs about Medications Questionnaire (BMQ) overuse scale.12
We circulated the draft Provider Perceptions of Medication Discontinuation survey to a 7-member expert panel of researchers and PCPs, including experts in survey development and medication safety; all provided feedback on the draft items, which were revised accordingly. The updated draft was then presented to a non-VA academic research forum composed of 20 to 30 internists and researchers similar to the target population of survey respondents, where the items were reviewed and suggested improvements were again incorporated.
Next, we used a semi-structured cognitive interview protocol with specific probes in 1-on-1 sessions with VA PCPs.13 Using a modified form of retrospective debriefing, subjects completed the survey 1 section at a time and answered questions about how they interpreted the items and decided on a response.14 Participants included professionals representing the most likely future respondents: physicians, nurse practitioners, and clinical pharmacy specialists. The cognitive interviews were conducted in 2 rounds (8 and 4 clinicians, respectively). Between rounds, we modified items and instructions based on feedback, with additional edits made after the second round. The resulting pilot-ready version of the survey included 56 items related to medication discontinuation, organized around the 10 dimensions noted above, plus 8 demographic items. The substantive content of the pilot-ready survey is summarized in Table 1.
Pilot Study and Psychometric Evaluation
Sample. We surveyed VA PCPs with prescribing privileges. Our sampling frame was the Primary Care Management Module, a centralized VA database containing information for all PCPs. From this listing, we identified all providers nationwide with the title of physician-primary care, physician-attending, PCP, nurse practitioner (NP), or physician assistant (PA). Using another centralized database, we identified clinical pharmacy specialists by selecting “pharmacy service providers” who had primary care clinical encounters. We determined the sample size based on our primary objective of achieving adequate power (0.80) for the multi-trait analysis (MTA), assuming, based on our general experience with survey-based attitude measures, that item-scale correlations would be moderate (0.3-0.5). This yielded an estimated effect size for the difference between correlations in the moderate range (Cohen’s effect size index q = 0.30) and a target sample of 180 for both the derivation and validation samples. Assuming a response rate of 20%, we randomly selected an initial mail-out sample of 2500 eligible providers, stratified evenly across 4 geographic regions, and oversampled NP/PAs and pharmacists to ensure adequate representation and to enable comparisons across the 3 provider types.
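For illustration, the brief Python sketch below (not the study's actual calculation, which is not fully specified in the text) checks that roughly 180 respondents per group provide approximately 0.80 power to detect a difference between two correlations of Cohen's q = 0.30 at a 2-sided alpha of .05, using the standard Fisher z approximation for independent correlations.

```python
# Minimal power check for a difference between two correlations expressed
# as Cohen's q (the difference of their Fisher z-transforms). This assumes
# the standard independent-correlations approximation; the study's exact
# procedure is not described in the text.
from math import sqrt
from statistics import NormalDist

def power_for_q(q, n1, n2, alpha=0.05):
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))      # SE of z1 - z2
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # 2-sided critical value
    return 1 - NormalDist().cdf(z_crit - q / se)

print(round(power_for_q(0.30, 180, 180), 2))  # ~0.81, consistent with the 0.80 target
```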
Survey administration. We sent each provider an e-mail introducing the survey objectives and containing a link to the survey website. If an e-mail was undeliverable, we selected a replacement subject of the same provider type and geographic stratum. Nonrespondents received up to 2 reminder e-mails at 1-week intervals. The survey remained open for 3 months. All responses were anonymous.
Analysis strategy. We applied MTA to evaluate the psychometric properties of the proposed scales. In MTA, scale reliability is assessed with Cronbach’s alpha coefficient, item convergence is evaluated by examining the correlation of each item with its assigned scale (item-scale correlations), and item discrimination compares each item’s correlation with its own scale against its correlations with all other scales.15 We randomly split respondents into derivation and validation groups and ran the initial MTA in the derivation sample. We made iterative modifications guided by both empirical findings and conceptual considerations, reassigning items to scales to improve the scales’ psychometric properties while also clarifying and focusing scale content. After arriving at a final model in the derivation sample, we tested it by repeating the MTA in the validation sample. To evaluate the proposed final questionnaire produced by the MTA, 4 expert-panel members reviewed the results for face validity via independent appraisal and group discussion. We assessed nonresponse bias by comparing respondents and nonrespondents on 4 factors available for all subjects from VA centralized databases: geographic region, job type, age, and gender. All analyses were conducted in SAS version 9.3 (SAS Institute Inc, Cary, North Carolina).
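The core computations of such an analysis can be sketched in a few lines of Python; this is an illustrative reimplementation under simple assumptions, not the SAS-based multitrait program used in the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def corrected_item_scale_corr(items, item_idx):
    """Correlation of one item with the sum of the remaining items in its
    scale, ie, the item-scale correlation corrected for overlap."""
    items = np.asarray(items, dtype=float)
    rest = np.delete(items, item_idx, axis=1).sum(axis=1)
    return np.corrcoef(items[:, item_idx], rest)[0, 1]

# Item discrimination then compares each item's corrected correlation with
# its own scale against its correlations with every other scale; an item
# that correlates more highly with a competing scale is a scaling failure.
```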
RESULTS
A total of 411 prescribers completed online questionnaires. After accounting for unreachable prescribers (n = 25), the response rate was 16.6%. Nonrespondents were more likely to be physicians than NP/PAs or pharmacists, but were otherwise similar with respect to age, gender, and geographic region. Details regarding respondent demographics are in Table 2. Regarding data quality, the median percentage of missing responses per item on the substantive questions was 11.7% (range = 1%-16%; n = 4-65); the median percentage of missing responses on the demographic questions was 17.2% (range = 16%-19%; n = 64-79).
The respondents were randomly divided into a derivation sample (n = 205) and a validation sample (n = 206). We conducted a series of MTAs in the derivation sample, beginning with the hypothesized model of 10 scales. Upon review of the data, we determined that 2 of the hypothesized scales (Indication Uncertainty and Multiple Providers) represented the frequency with which various events occurred rather than respondents’ beliefs. Their items combine to describe and define a scale (ie, a formative measure) rather than reflect an underlying latent construct driving item responses (ie, a reflective measure).16,17 Formative measures will, by their nature, not necessarily exhibit high internal consistency reliability or item convergence and discrimination. Therefore, we omitted these questions from the MTA but retained them as indices for use in future analyses.
Thus, we began the series of MTAs with 8 hypothesized dimensions. Consistent with the recommendations of Ware and colleagues, we included only the subset of 167 respondents (81%) who answered at least half of the items in all of the hypothesized scales (the half-scale criterion), estimating values for the unanswered items based on the within-subject average of the answered items in the scale.18 Item-scale correlations were corrected for overlap, and differences between 2 item-scale correlations were deemed significant if their 95% confidence intervals did not intersect. After each iteration, an item was moved if: 1) it was more highly correlated with a competing scale than with its current scale (ie, scaling failure), 2) the re-assignment was consistent with the content of the destination scale, and 3) its removal did not reduce the internal consistency reliability of the current scale below the minimum of 0.70 recommended for group comparisons.19 After 8 iterations, the original model evolved to a more parsimonious model with 5 scales: Medication Characteristics, Current Patient Clinical Factors, Predictions of Future Health States, Patients’ Resources to Manage Their Own Health, and Education and Experience. One of those scales—Medication Characteristics—consisted of only 2 items and demonstrated relatively low internal consistency reliability. However, there were no scaling failures, and thus, we tested the 5-scale model in the validation sample.
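A minimal sketch of the half-scale scoring rule described above is shown below; the function is hypothetical and intended only to make the rule concrete (a respondent is scored only if at least half of a scale's items are answered, and any unanswered items are replaced with the respondent's mean of the answered items in that scale).

```python
def half_scale_score(responses):
    """Score one scale for one respondent (responses on a 1-5 metric, with
    None marking an unanswered item). Returns None if the respondent fails
    the half-scale criterion; otherwise imputes missing items with the
    person's mean of answered items and returns the scale mean."""
    answered = [r for r in responses if r is not None]
    if len(answered) < len(responses) / 2:
        return None                                   # half-scale criterion not met
    person_mean = sum(answered) / len(answered)
    imputed = [r if r is not None else person_mean for r in responses]
    return sum(imputed) / len(imputed)

print(half_scale_score([4, 5, None, 3]))        # 4.0 (one item imputed)
print(half_scale_score([4, None, None, None]))  # None (too few items answered)
```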
The pattern of convergent and discriminant correlations generated by the MTA in the validation sample demonstrated strong support for the 5-scale model (Table 3). A correlation of 0.40 or higher between an item and its overall scale score (adjusted for overlap) is indicative of adequate item convergence.18,20 In the validation sample, this criterion was met by all but 3 of the items. Additionally, items were significantly more highly correlated with their own scale than with other scales in 65 of 76 comparisons (86%), and higher, albeit not significantly, in another 9 comparisons. Thus, appropriate item discrimination was seen for 99% of item-to-scale correlations. Scale internal consistency reliabilities were adequate, ranging from 0.75 to 0.82, with the exception of the 2-item Medication Characteristics scale (Cronbach’s alpha of 0.59 in the derivation sample and 0.33 in the validation sample).
Basic descriptive statistics for the 5 scales are reported in Table 4.
Scale scores, computed as the mean across relevant items (all on 1-to-5 response scales), ranged from 2.7 (Medication Characteristics) to 4.2 (Education and Experience). The percentage of respondents with scores at the floor ranged from 0% (Current Patient Clinical Factors and Education and Experience) to 2% (Medication Characteristics), whereas the percentage with scores at the ceiling ranged from 1% (Current Patient Clinical Factors) to 20% (Education and Experience).
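The floor and ceiling percentages reported above can be obtained as in the short sketch below; the function and example scores are illustrative, assuming scale scores on a 1-to-5 metric.

```python
import numpy as np

def floor_ceiling(scale_scores, minimum=1.0, maximum=5.0):
    """Percentage of respondents scoring at the floor and ceiling of a scale."""
    scores = np.asarray(scale_scores, dtype=float)
    scores = scores[~np.isnan(scores)]                # drop unscored respondents
    at_floor = 100 * np.mean(scores == minimum)
    at_ceiling = 100 * np.mean(scores == maximum)
    return round(at_floor, 1), round(at_ceiling, 1)

print(floor_ceiling([4.2, 5.0, 3.8, 5.0, 2.5]))  # (0.0, 40.0) for these hypothetical scores
```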
Table 5 reports the correlations between scales in the off-diagonal entries and the scale internal consistency reliability estimates in the diagonal entries. These results are consistent with those expected of a set of measures of related, yet distinguishable, factors that influence medication discontinuation decisions. Specifically, the inter-scale correlations range from 0.03 to 0.44 (median = 0.14). With the exception of Medication Characteristics, scale alpha coefficients were substantially higher than the scale-to-scale correlations, further supporting discrimination between the scales; if the scale reliabilities did not exceed the inter-scale correlations, the scales would be interchangeable (ie, not measures of unique and distinguishable factors).
Finally, to evaluate construct validity, we conducted a multiple linear regression analysis using the 5 scales as predictors of self-rated comfort with medication discontinuation decisions. The mean comfort rating was 7.5 (standard deviation = 1.8), with a generally normal distribution (skew = -0.47; kurtosis = -0.52), supporting its use as a linear outcome variable.21 Other candidate predictors were 8 provider demographics; 9 survey items regarding patient medication-taking behaviors, medication reconciliation, side effects, workplace support for monitoring patients, and prior discontinuation experience; and the BMQ overuse scale. The model was built using forward selection; given the exploratory nature of the exercise, a P value of .10 was used for entry into the model. The model overall was significant (P <.0001) and explained 27.6% of the variance in prescriber comfort with de-prescribing. Three of the new attitude scales were significant predictors (standardized beta estimates): Current Patient Clinical Factors (0.17; P = .005), Predictions of Future Health States (-0.11; P = .005), and Education and Experience (0.41; P <.0001). The zero-order correlations between these scales and self-rated comfort were 0.16, 0.01, and 0.37, respectively. Additional significant predictors included age, race, provider type (physician), region, and prior discontinuation experience.
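As a rough illustration of this final step (not the SAS model used in the study, with hypothetical data and variable names, and omitting the forward-selection step), the sketch below checks the skew and kurtosis of the outcome and fits ordinary least squares on z-scored variables so that the coefficients are standardized betas and the fit is summarized by R-squared.

```python
import numpy as np

def skew_excess_kurtosis(x):
    """Crude sample skewness and excess kurtosis for a distribution check."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

def standardized_ols(X, y):
    """OLS on z-scored predictors and outcome; returns standardized betas and R-squared."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])
    beta, *_ = np.linalg.lstsq(design, yz, rcond=None)
    resid = yz - design @ beta
    return beta[1:], 1.0 - (resid @ resid) / (yz @ yz)

# Hypothetical data: 5 scale scores as predictors, a 0-10 comfort rating as the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 7.5 + 0.4 * X[:, 4] + rng.normal(scale=1.7, size=200)
print(skew_excess_kurtosis(y))
betas, r2 = standardized_ols(X, y)
print(np.round(betas, 2), round(r2, 3))
```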
DISCUSSION
We developed and administered a survey instrument designed to characterize PCPs’ and clinical pharmacists’ experiences, attitudes, and beliefs regarding medication discontinuation. To our knowledge, this is the first instrument of its kind, and it should enable better understanding of the facilitators of and barriers to de-prescribing in a general ambulatory population—an area in need of further study.
To examine the instrument’s psychometric properties, we applied MTA to responses from more than 400 VA primary care prescribers. We identified 5 scales related to constructs of clinical de-prescribing decisions: Medication Characteristics, Current Patient Clinical Factors, Predictions of Future Health States, Patients’ Resources to Manage Their Own Health, and Education and Experience. These scales were similar, but not identical, to the dimensions of our a priori conceptual model. Where the 2 differed, the empirically derived scales exhibited logical face-value relationships. For example, parsing the hypothesized dimension “Clinical Status” into Current Patient Clinical Factors and Predictions of Future Health States is reasonable inasmuch as the latter may involve decreased certainty and thus influence decision making differently from the former.
Our analyses demonstrated that 3 of the 5 final scales predicted prescribers’ self-rated comfort with medication discontinuation decisions. Predictions of Future Health States appears to be an example of classical suppression: its zero-order correlation with comfort was essentially 0, yet it carried a significant negative coefficient in the regression model.22 The scale items are phrased such that a higher score indicates a higher likelihood of recommending discontinuation even when such action might be associated with patient concerns about symptom return or other negative consequences. The negative coefficient in the regression model suggests that, after controlling for other factors, endorsing discontinuation even if it may lead to worse health outcomes is associated with lower levels of comfort with discontinuation. Finally, Current Patient Clinical Factors and Education and Experience were both significant positive predictors of providers’ comfort with discontinuation.
Altogether, Current Patient Clinical Factors, Predictions of Future Health States, and Education and Experience, along with other factors, accounted for 27.6% of the variance in provider comfort. Provider comfort, in turn, likely influences clinicians’ ability and willingness to address discontinuation. As VA implements the Patient Aligned Care Team (PACT) model of the patient-centered medical home, more types of clinicians will encounter potential discontinuation opportunities. Building on more than a decade of increasing responsibilities within VA, clinical pharmacists play a significant role in PACT.23 Although pharmacists do not have the jurisdiction to make diagnoses, they can initiate, titrate, and discontinue medications in the management of chronic conditions. Additionally, NPs and PAs are gaining increased autonomy, often operating on par with physicians, which supports the need to better understand their comfort and willingness to de-prescribe. Therefore, using this survey instrument to assess the attitudes, beliefs, and comfort of various clinicians may identify those prescribers who need additional training in proactive medication discontinuation.
Limitations
Several limitations should be noted. First, the response rate was low. Multiple factors may have contributed, including clinicians’ busy schedules and the fact that the survey could be accessed only from behind the organizational firewall. Clinicians have lower response rates than the general population,24,25 and no incentive was provided. However, because our primary purpose was scale development rather than characterizing clinicians’ attitudes through a representative sample, the sample size provided adequate power. Future studies that provide an incentive, use a shortened survey, or employ enhanced follow-up may yield higher response rates.26 Second, the survey was developed in VA, and there may be differences in provider characteristics (ie, those who choose to practice within VA may differ from those who do not) or in the practice culture influencing them. Future research should evaluate the instrument and confirm its psychometric properties outside VA.
For individual items, the missing response rate was as high as 16% for some of the substantive questions and 19% for demographic questions. Many of these items were near the end of the survey, suggesting a role for response fatigue. Eliminating unnecessary items and shortening the survey could improve response rates and reduce missing data. The MTA supported only a 2-item Medication Characteristics scale, which consequently demonstrated more limited reliability. Creating and testing additional items on this topic should strengthen the scale’s psychometric properties. When considered along with the face validity of the items, the pattern of item-scale relationships reported here preliminarily supports the proposed interpretation of the new scales.18,27-30 However, because this support is based only on analysis of data from the proposed measures themselves, such interpretation is tentative. We do not know from these data whether the scales measure what they purport to measure, only that the items assigned to a given scale are probably related to the same construct and that the proposed scales appear to measure different constructs.
Additional testing is needed to more definitively establish the meaning of these measures, including criterion and construct validation. Finally, 60% of respondents scored at the ceiling of the Education and Experience scale, potentially limiting that scale’s ability to discriminate between subgroups or detect change over time. This scale could benefit from additional items regarding the details of training or experience that contribute in a less uniform manner to clinicians’ comfort with de-prescribing.
CONCLUSIONS
Using separate derivation and validation samples, we created replicable and psychometrically sound scales representing key dimensions that contribute to primary care prescribers’ medication discontinuation decisions. Prior research on overuse in healthcare has tended to focus on practices related to screening and diagnostic testing.31 De-implementation, of which medication discontinuation is a particular example, is also thought to be qualitatively different from preventing inappropriate initiation and to carry its own difficulties.32,33 For example, loss of clinical information between healthcare providers and systems may make it more difficult to obtain an accurate medication list; without these data, clinicians will have less certainty about de-prescribing. Therefore, using this survey instrument in future evaluations to understand how clinicians in different care settings perceive and incorporate de-prescribing into their practice can identify provider characteristics or specific decision-making factors associated with reluctance to discontinue. Those findings can then be leveraged to design interventions that improve the quality and safety of prescribing by reducing inappropriate medication use.
Author Affiliations: Center for Healthcare Organization and Implementation Research (AL, SRS, KS) and Section of General Internal Medicine (AL, SRS), VA Boston Healthcare System, Boston, MA; Edith Nourse Rogers Memorial Veterans Affairs Medical Center (BGB), Bedford, MA; Section of General Internal Medicine, Boston Medical Center (AL, SRS), Boston, MA; Health Law, Policy and Management, Boston University School of Public Health (BGB, MM), Boston, MA; Performance Measurement, VHA Office of Analytics and Business Intelligence (MM), Bedford, MA.
Source of Funding: The principal investigator (AL) was supported by a Department of Veterans Affairs (VA), Veterans Health Administration, Health Services Research and Development Career Development Award (CDA12-166), and the study was conducted using resources of the VA Boston Healthcare System. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs. The funding organization had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; nor the decision to submit the manuscript for publication.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (BGB, AL, MM, SRS); acquisition of data (AL); analysis and interpretation of data (BGB, AL, MM, KS, SRS); drafting of the manuscript (AL); critical revision of the manuscript for important intellectual content (BGB, AL, MM, KS, SRS); statistical analysis (AL, MM, KS); provision of patients or study materials (AL); obtaining funding (AL); administrative, technical, or logistic support (AL, MM); and supervision (SRS).
Address Correspondence to: Amy Linsky, MD, MSc, General Internal Medicine (152G), VA Boston Healthcare System, 150 S. Huntington Ave, Boston, MA 02130. E-mail: amy.linsky@va.gov.
REFERENCES
1. Bushardt RL, Massey EB, Simpson TW, Ariail JC, Simpson KN. Polypharmacy: misleading, but manageable. Clin Interv Aging. 2008;3(2):383-389.
2. Nebeker JR, Barach P, Samore MH. Clarifying adverse drug events: a clinician’s guide to terminology, documentation, and reporting. Ann Intern Med. 2004;140(10):795-801.
3. Gandhi TK, Weingart SN, Borus J, et al. Adverse drug events in ambulatory care. N Engl J Med. 2003;348(16):1556-1564.
4. Gurwitz JH. Polypharmacy: a new paradigm for quality drug therapy in the elderly? Arch Intern Med. 2004;164(18):1957-1959.
5. Preskorn SH, Silkey B, Shah R, et al. Complexity of medication use in the Veterans Affairs healthcare system: part I: outpatient use in relation to age and number of prescribers. J Psychiatr Pract. 2005;11(1):5-15.
6. Bain KT, Holmes HM, Beers MH, Maio V, Handler SM, Pauker SG. Discontinuing medications: a novel approach for revising the prescribing stage of the medication-use process. J Am Geriatr Soc. 2008;56(10):1946-1952. doi: 10.1111/j.1532-5415.2008.01916.x.
7. Scott IA, Hilmer SN, Reeve E, et al. Reducing inappropriate polypharmacy: the process of deprescribing. JAMA Intern Med. 2015;175(5):827-834. doi: 10.1001/jamainternmed.2015.0324.
8. Straand J, Sandvik H. Stopping long-term drug therapy in general practice. How well do physicians and patients agree? Fam Pract. 2001;18(6):597-601.
9. Linsky A, Simon SR, Marcello TB, Bokhour B. Clinical provider perceptions of proactive medication discontinuation. Am J Manag Care. 2015;21(4):277-283.
10. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834.
11. Lavrakas PJ, ed. Encyclopedia of Survey Research Methods. Thousand Oaks, CA: Sage Publications, Inc; 2008.
12. Horne R, Weinman J, Hankins M. The beliefs about medicines questionnaire: the development and evaluation of a new method for assessing the cognitive representation of medication. Psychol Health. 1999;14(1):1-24.
13. Willis GB. Cognitive interviewing: a “how to” guide. Paper presented at: Meeting of the American Statistical Association; 1999. http://appliedresearch.cancer.gov/archive/cognitive/interview.pdf. Accessed October 5, 2016.
14. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. New York, NY: John Wiley & Sons; 2000.
15. Hays RD, Hayashi T. Beyond internal consistency reliability: rationale and user’s guide for multitrait scaling analysis program on the microcomputer. Behav Res Methods Instrum Comput. 1990;22(2):167-175.
16. Bollen KA, Bauldry S. Three Cs in measurement models: causal indicators, composite indicators, and covariates. Psychol Methods. 2011;16(3):265-284. doi: 10.1037/a0024448.
17. Shwartz M, Ash AS. Composite measures: matching the method to the purpose. National Quality Measures Clearinghouse website. https://www.qualitymeasures.ahrq.gov/expert/expert-commentary/16464. Published November 3, 2008. Accessed June 29, 2015.
18. Ware JE Jr, Harris WJ, Gandek B, Rogers BW, Reese PR. MAP-R for Windows: Multitrait/Multi-item Analysis Program—Revised User’s Guide. Boston, MA: Health Assessment Lab; 1997.
19. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York, NY: McGraw-Hill, Inc; 1994.
20. Kerlinger FN. Foundations of Behavioral Research. 2nd ed. New York, NY: Holt, Rinehart and Winston Inc; 1973.
21. Meyers LS, Gamst G, Guarino AJ. Applied Multivariate Research: Design and Interpretation. Thousand Oaks, CA: Sage Publications, Inc; 2006.
22. Cohen J, Cohen P, West SG, Aiken LS. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. 3rd ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2003.
23. Manolakis PG, Skelton JB. Pharmacists’ contributions to primary care in the United States collaborating to address unmet patient care needs: the emerging role for pharmacists to address the shortage of primary care providers. Am J Pharm Educ. 2010;74(10):S7.
24. Sudman S. Mail surveys of reluctant professionals. Eval Rev. 1985;9(3):349-360.
25. Cummings SM, Savitz LA, Konrad TR. Reported response rates to mailed physician questionnaires. Health Serv Res. 2001;35(6):1347-1355.
26. VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof. 2007;30(4):303-321.
27. Brown TA. Confirmatory Factor Analysis for Applied Research. New York, NY: Guilford Press; 2006.
28. Dong XF, Liu YJ, Wang AX, Lv PH. Psychometric properties of the Chinese version of the Self-Efficacy for Appropriate Medication Use Scale in patients with stroke. Patient Prefer Adherence. 2016;10:321-327. doi: 10.2147/PPA.S101844.
29. Radwin LE, Washko M, Suchy KA, Tyman K. Development and pilot testing of four desired health outcomes scales. Oncol Nurs Forum. 2005;32(1):92-96.
30. Cleanthous S, Isenberg DA, Newman SP, Cano SJ. Patient Uncertainty Questionnaire-Rheumatology (PUQ-R): development and validation of a new patient-reported outcome instrument for systemic lupus erythematosus (SLE) and rheumatoid arthritis (RA) in a mixed methods study. Health Qual Life Outcomes. 2016;14:33. doi: 10.1186/s12955-016-0432-8.
31. Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. doi: 10.1001/jama.2012.476.
32. Roman BR, Asch DA. Faded promises: the challenge of deadopting low-value care. Ann Intern Med. 2014;161(2):149-150. doi: 10.7326/M14-0212.
33. Davidoff F. On the undiffusion of established practices. JAMA Intern Med. 2015;175(5):809-811. doi: 10.1001/jamainternmed.2015.0167.