The American Journal of Managed Care

July 2011
Volume 17, Issue 7

Obtaining Patient Feedback at Point of Service Using Electronic Kiosks

Electronic kiosks were used to survey patients on their experience with care at the time of care delivery in an urban primary care practice.

Background:

Engaging patients in their healthcare is a goal of healthcare reform. Obtaining sufficient, reliable feedback from patients about their experiences in an office encounter has been a challenge.

Objective:

To determine the feasibility of collecting feedback from patients regarding their office encounter at the point of care using touch screen kiosk technology in an urban primary care clinic.

Methods:

We analyzed response rate, ease of use, provider data, and condition-specific data. The study was conducted over a 45-day period at 1 internal medicine academic teaching practice. Providers, staff, and a sponsor-supported monitor directed patients to use the kiosk after an office visit.

Results:

A total of 1923 surveys were completed from 3850 office visits (50%). There was no appreciable impact on office flow in terms of wait time, checkout procedures, or visit with provider. Characteristics of patients completing the surveys were similar to practice demographics of patients with an office visit during the study period in terms of sex, but differed by age and race. Small but statistically significant differences were seen among patient ratings of resident versus attending physicians. Patients with depression were less likely than patients with diabetes, chronic low back pain, or asthma to report that they had set personal goals to manage their condition.

Conclusion:

This technology represents an important advance in our ability to capture the patient’s opinion regarding quality and practice improvement initiatives, and has the potential for directly engaging patients in their care.

(Am J Manag Care. 2011;17(7):e270-e276)

Electronic kiosks, which enable data to be collected at the point of care, allow patients to report on specific elements of their care visit with the provider.

  • This method of surveying patients provides a much larger data set than traditional paper-and-pencil or mailed surveys, allowing for provider-level analysis.

  • Electronic kiosk surveys are simple to integrate into clinic flow and require minimal additional staff resources.

Goals of healthcare reform include improving the quality of patient care and reducing costs. How improvement is measured must include consideration of the actual process and outcome of care from the patient’s perspective.1-3 For effective chronic disease management, patients should leave a visit with their care provider having a clear understanding of their diagnosis, goals, and treatment plan.4-8 Traditional methods of evaluating a visit from the patient’s perspective, which focus on satisfaction with care through postcare surveys and interviews, have been limited by a number of factors, including low response rates, recall bias related to delay in obtaining information, selection bias related to which patients actually complete a survey, and the halo effect, which reflects patients’ desire to speak highly of their providers.9-12 In addition, the logistics of collecting this information may be complex, requiring staff time to distribute surveys and collect and analyze data.

New information technology greatly expands the ability of all stakeholders (patients, providers, practices, systems, and payers) to obtain patient feedback at the point of care, with the potential to link to the patient’s electronic health record to track the relationship between provision of care (what medications were prescribed, what tests were ordered), outcomes, and patient understanding of what happened during the visit and their role after the visit. These data can be used to evaluate, manage, and potentially reward systems, practices, patients, and providers. Electronic kiosks offer instant data access, feedback capabilities, and generation of patient-specific health information,13 and are increasingly being used within ambulatory practices to collect data at the point of care delivery in a time-, cost-, and space-efficient manner.14,15

We hypothesized that structuring and standardizing a process that gives patients the opportunity to provide immediate, anonymous electronic feedback using health information technologies integrated into the daily practice of medicine would lead to higher response rates than the traditional paper-and-pencil surveys being utilized by the practice, while allowing the clinician and practice to receive more detailed, provider-level information about the patient’s experience. We also sought to determine whether this point-of-care system could be used to assess patients’ understanding of their care plan.

METHODS

The study was conducted over a 45-day period at 1 internal medicine academic teaching practice of the University of Pennsylvania. The practice consists of 9 faculty, 2 nurse practitioners, and 48 categorical residents, and conducts an average of 25,000 patient visits per year. The patient mix of the practice includes a range of socioeconomic status and insurance types.

Clinicians and staff were instructed to direct all patients to the kiosk immediately after the office visit to voluntarily complete an anonymous survey. Clinicians were also asked to review the primary reason for visit (acute visit, checkup or screening, a specific condition such as diabetes) with patients and were given a set of “visit confirmation” notepads to assist with this process.

We used the CarePartners Plus® touch screen Well™ Patient-Interactive Healthcare Management System (P-IHMS) to generate surveys and collect data. Two kiosks were installed in close proximity to the checkout desks where patients routinely stop after their visit.

The study consisted of 2 phases. During the first phase a sponsor-supported monitor directed patients to use the kiosk and was available to answer questions and respond to technical issues. The monitor kept an observation log of all patients seen exiting the practice through the checkout hallway (an additional exit exists in the clinic, and unobserved patients could have exited the clinic via that route). Each observation was individually indexed and included details regarding whether the patient had been directed to the kiosk by the practitioners, staff, or monitor, and whether the patient successfully completed the survey. During the second phase, the monitor did not actively direct patients toward the kiosks but continued to record observations of use. Providers and practice staff actively directed patients to use the kiosk during both phases of the study. Phase 1 lasted for 37 days and phase 2 lasted for 8 days.
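As an illustration of the observation log’s structure, a minimal sketch follows; the field names and types are assumptions for exposition, not the study’s actual instrument.

```python
from dataclasses import dataclass
from enum import Enum

class DirectionSource(Enum):
    """Who, if anyone, directed the patient to the kiosk (illustrative categories)."""
    PROVIDER = "provider"
    STAFF = "staff"
    MONITOR = "monitor"
    NONE = "none"

@dataclass
class ExitObservation:
    """One indexed entry in the monitor's observation log (hypothetical fields)."""
    index: int                    # sequential index of the observation
    phase: int                    # 1 = monitor actively directing; 2 = observation only
    directed_by: DirectionSource  # source of direction toward the kiosk, if any
    completed_survey: bool        # whether the patient completed the kiosk survey
```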

Kiosk Administration of Survey

Patients were initially presented with a welcome and consent screen informing them that participation was voluntary, that responses to the survey were anonymous and for research purposes only, and that their responses would not have any impact on the care provided. Permission to continue was requested. Those who did not wish to continue could select “no” and end the survey.

Participants were then asked to provide demographic information (sex, race, age) and to select the primary reason for the visit from the options in the medical conditions list. On the following screen, they were asked to select the clinician who treated them by photograph and name.

All participants were then asked 5 questions (the Patient Experience Module) based on the National Committee for Quality Assurance’s Healthcare Effectiveness Data and Information Set & Quality Measurement standards. These questions concerned their experience with their provider and the visit. The next set of questions was based on the reason-for-visit selection (Disease Management Module). If the patient selected checkup, screening, acute care, or other as the reason for the visit, they were asked about the preventive medicine and patient education services provided. For those who selected a specific disease state, questions asked whether the care provider had treated the patient according to best-practice guidelines for disease management of the condition, such as performing a foot exam for a diabetic patient.

The next set of questions, presented to all participants, related to medication safety (Medication Module). Patients were asked to indicate whether the provider had given them written educational materials and whether they had received any medication prescriptions. They were also asked whether the medication they received was accompanied by an explanation from their care provider on proper use and what to do in the case of an adverse reaction.
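The survey’s branching can be summarized as a short sketch; the module and reason-for-visit labels below paraphrase the description above and are not drawn from the actual P-IHMS configuration.

```python
GENERAL_REASONS = {"checkup", "screening", "acute care", "other"}

def modules_for_visit(reason_for_visit: str) -> list[str]:
    """Return the sequence of survey modules for a reason-for-visit selection.

    Labels paraphrase the survey description; the deployed flow may differ.
    """
    modules = ["Patient Experience Module"]  # 5 NCQA HEDIS-based questions, asked of everyone
    if reason_for_visit.lower() in GENERAL_REASONS:
        # General visits trigger preventive medicine and patient education questions.
        modules.append("Preventive Medicine / Patient Education questions")
    else:
        # A specific condition (eg, diabetes) triggers guideline-based questions,
        # such as whether a foot exam was performed for a diabetic patient.
        modules.append("Disease Management Module")
    modules.append("Medication Module")  # asked of all participants
    return modules

print(modules_for_visit("diabetes"))
```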

Analysis

We compared the demographics of patients completing surveys with the demographics of the overall patient population to assess representativeness using χ² tests. Response rates were compared for directed versus undirected kiosk use and for phase 1 (monitor and practice directed) and phase 2 (only practice directed) using z tests of proportions.
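A minimal sketch of these comparisons using standard Python statistical libraries follows; the demographic counts are placeholders, while the phase counts are taken from the Results.

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

# Chi-square test comparing respondent demographics with the practice population
# (placeholder 2x2 counts; the study used the actual demographic tables).
demographics = np.array([
    [420, 960],    # eg, respondents vs nonrespondents in one age band
    [310, 1160],   # eg, respondents vs nonrespondents in another age band
])
chi2, p_chi2, dof, _ = chi2_contingency(demographics)

# Two-proportion z test of survey completion, phase 1 vs phase 2,
# using counts reported in the Results (1766/3219 and 161/631).
z_stat, p_z = proportions_ztest(count=np.array([1766, 161]),
                                nobs=np.array([3219, 631]))
print(f"chi2 = {chi2:.2f} (P = {p_chi2:.3f}); z = {z_stat:.2f} (P = {p_z:.3f})")
```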

We assessed feasibility using observational data on office flow, ease of kiosk use for participants, and effort required of staff and providers. Additionally, survey content analysis included provider-level ratings of experience of care, patient understanding of provider explanations of medications prescribed, and goal setting for self-management. Patient experience data were used to create a score for the provider (attending physicians and resident physicians). Each “yes” response to the Patient Experience Module questions was assigned a value of 1, and all “no” and “not sure” responses were assigned a null value. The average survey score for each provider type was then determined based on a scale of 1 to 5, and the average scores for resident and attending provider types were compared. Descriptive statistics for patient-level feedback on the Medication Module questions and personal goal setting by condition from the Disease Management Module questions are presented as well. This study was granted approval by the University of Pennsylvania Institutional Review Board.
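A minimal sketch of this scoring rule (each “yes” counts 1; “no” and “not sure” contribute nothing), with illustrative variable names:

```python
from statistics import mean

def survey_score(responses: list[str]) -> int:
    """Count 'yes' answers to the 5 Patient Experience Module questions."""
    return sum(1 for r in responses if r == "yes")

def average_score_by_provider_type(surveys: list[tuple[str, list[str]]]) -> dict[str, float]:
    """Average survey scores separately for each provider type."""
    by_type: dict[str, list[int]] = {}
    for provider_type, responses in surveys:
        by_type.setdefault(provider_type, []).append(survey_score(responses))
    return {ptype: mean(scores) for ptype, scores in by_type.items()}

# Example: one survey per provider type, scored 4 and 3 respectively.
demo = [("attending", ["yes", "yes", "yes", "no", "yes"]),
        ("resident", ["yes", "not sure", "yes", "yes", "no"])]
print(average_score_by_provider_type(demo))  # {'attending': 4, 'resident': 3}
```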

RESULTS

Sample Validity

The total number of completed surveys used in analysis was 1923. Four completed surveys where the respondent indicated an age of 0 to 17 years were removed from analysis because the practice does not provide care to patients younger than 18 years. Among patients who approached the kiosk and began the survey, 16.5% declined to go past the consent screen.

Table 1 compares respondents with all unique patients who had office visits during the period of kiosk activity. There were small but statistically significant variations in the racial/ethnic and age mixes, with fewer people older than 65 years completing the survey. More Asian and Hispanic patients engaged in the survey than expected, while fewer African American and Caucasian patients participated. Sex of survey participants mirrored that of the patient mix that presented to the practice for office visits during the study period.

Response Rate

The Figure shows combined data for both phases of the study. Overall, a total of 1923 surveys were completed (50% response rate). For phase 1 the overall response rate was 55% (1766/3219), and for phase 2 it was 26% (161/631). There were 3850 office visits during the study period, and 3141 patients were observed exiting the clinic and passing by the kiosks (709 patients exited without passing by the kiosks or without being observed). Of the patients who were directed to use the system, there was no difference in response rate between phase 1 and phase 2 (64% vs 67%, z = 0.768, P = .22; data not shown in the Figure). Of the 370 patients who were not directed to use the kiosk by a provider, staff, or monitor, 140 completed surveys (38%). Overall, patients were more likely to use the kiosks when directed, and direction from practice staff and providers was more likely to result in survey completion than direction from the sponsor monitor (z = 8.612, P <.001).

Impact of Kiosk Use on Office Flow

Data obtained by the monitor indicated that the average time to complete a survey was 3 minutes per participant. Informal interviews with checkout staff and providers revealed there was no appreciable impact on office flow in terms of wait time to check in or check out. Provider and staff effort in directing patients was minimal. No complaints were received from patients, providers, or staff regarding the impact of the kiosk project on office flow. During phase 2, when no technical support was available, staff received few questions about kiosk use.

Care Visit Content Analysis

Patient Experience Ratings by Provider Type. One potential advantage of the kiosk system is the ability to obtain sufficient data to distinguish between the performance of individual providers or subgroups of providers. Data were therefore analyzed to determine whether any differences in patient experience could be detected between resident and attending physicians. Results are displayed in Table 2 and demonstrate that patients were more satisfied with care delivered by attending physicians compared with less experienced resident physicians (P <.001), though the effect size for this difference was small (Cohen’s d = 0.21). The average number of completed patient surveys was 93.7 per attending physician and 15.2 per resident.
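For reference, Cohen’s d is the standardized difference between two group means; the formula below is the standard definition, not a reconstruction of the study’s calculation, since the underlying means and standard deviations are not reported here:

```latex
d = \frac{\bar{x}_{\text{attending}} - \bar{x}_{\text{resident}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

By conventional benchmarks, d ≈ 0.2 corresponds to a small effect, consistent with the reported d = 0.21.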

Medication Information Module. Questions regarding medication prescribing were asked of all participants who completed a survey. Less than half (43.9%) of patients reported receiving educational material from their provider. We found high levels of patient-reported understanding of explanations from providers on how to use medications (99.9%).

Disease Management Module. Patients were asked to answer specific questions relating to the reason for their visit. More than three-fourths of patients (77%) chose checkup/screening or acute care visit as their reason for visit, and 23% chose a specific condition. Patients who chose a specific condition were asked whether they set personal goals with their provider to manage that condition (Table 3). Although the number of respondents was low, patients being treated for diabetes (P = .002), chronic low back pain (P = .01), and asthma (P = .03) were more likely to report a positive answer than patients with depression. Almost all of the participants (99.5% [1872/1882]) said they would stick to the care plan they agreed to.

DISCUSSION

Patient experience of care has been found to be an independent predictor of quality of care, adding information over and above other process measures.16 Point-of-care surveys provide insights into the actual encounter as well. These insights may help predict patient engagement and future adherence with recommended regimens. Providers generally established goals with patients who presented for chronic care visits, but were less likely to do so if the patient’s condition was depression. This finding is consistent with research suggesting the need to improve primary care provider identification and management of depression17-19 and highlights the type of quality indicators that point-of-care electronic data capture allows. Educational materials were not provided in the majority of the visits. Patients did, however, feel that their questions regarding new medications were answered. Future studies can explore the use of this technology to identify opportunities for practice and provider improvement, and track patient responses to improvement initiatives as well as patient adherence to established care plans.

The overall response rate of 50% (1923/3850) is a dramatic improvement over current response rates (around 19% per quarter for mailed surveys from this practice) and is more likely to generate a reliable measurement of patient perception.20 The fact that differences were detected between levels of provider suggests that this tool might be useful in evaluating provider performance. Increasingly, patient experience has been included in the evaluation of providers through specialty certification and maintenance of certification programs, trainees through the Accreditation Council for Graduate Medical Education residency review requirements,21 and health systems and practices through the National Committee for Quality Assurance patient-centered medical home recognition programs.22,23 Inclusion of patient experience data in pay-for-performance measures rewards providers and practices that are patient centered.

Characteristics of patients completing the surveys were generally similar to the overall practice demographics, implying that there is little selection bias among those willing to take an electronic survey. Patients of all ages and socioeconomic backgrounds appeared comfortable with the technology. Other studies testing the feasibility of electronic survey methods have found similarly high rates of acceptance and preference for the method over paper-based surveys, as well as less data error.24-26 The 64% response rate seen when patients were directed to the kiosk, even with minimal prompting, represents a much larger proportion of the patient population than has previously been reached for this practice. Of note, the rate of survey completion was highest when a practice staff member or provider directed the patient to complete the survey.

This study had a number of limitations. Disease-specific data were limited by a lack of patient identification of disease state as the reason for the visit. Only 23% of patients identified a disease condition as the primary reason for their visit, limiting the additional data on best practices for each condition that could have been collected. The reason for this lack of data is likely 2-fold: not all providers adhered to use of the “visit confirmation” notepads on which a primary reason for visit was to be identified, and patients’ perception of the reason for their visit might have differed from that of their provider.

An externally provided dedicated monitor was present during both phases of the study to direct patients toward kiosks at checkout in phase 1 and to collect usage data in both phases. While a monitor is unlikely to be feasible in primary care practices, the phase 2 response rate of 26% (161/631) of all patients visiting the practice still resulted in a robust sampling for the short time period (8 days). In addition, patients’ response to direction by staff (67% survey completion when directed by practice staff) suggests that, with training, a reasonable sampling of patients can be obtained with minimal commitment of staff time. Response bias must be considered due to the location of the survey kiosk as well as the monitor and staff presence, both of which may have contributed to distorted responses reflecting social desirability,27,28 leniency bias,29,30 or acquiescence, particularly in the rating of providers. In addition, the presence of inaccurate data (patients reporting their age as younger than 18 years, patients reporting having received glaucoma screening when this procedure is not done at the practice) is problematic and raises questions regarding patient comprehension of survey questions that should be addressed. Nonresponse bias in this study is harder to determine due to the anonymity of the participants.31 While we can account for demographic characteristics of all the patients who visited the clinic during the study time frame, we do not know who among them participated in the survey. Future exploration of these 2 potential sources of bias is needed.

While patients generally were comfortable using the kiosk technology and required little prompting or help once they had engaged in the survey, this technology obviously excludes those with no or low literacy, those who are blind or vision impaired, and persons with other impairments that may make it difficult to physically engage in kiosk use. It is possible that older patients are less comfortable with the technology, a finding seen in previous research.32,33 The differences seen in race could reflect actual differences in who chose to engage in the kiosk survey but may also reflect inaccuracies in the electronic medical record race and ethnicity data, as this information is not always self-reported. While the differences in race for patients who engaged in the survey were statistically significant, it is unclear whether they represent a clinically significant difference between groups in likelihood of survey engagement. A meta-analysis of nearly 70,000 patients found no statistically significant difference in minority populations’ willingness to engage in health research compared with whites.34 Despite such findings, further study into the differences between those who do and do not complete electronic surveys within their healthcare provider’s office is warranted, given that race, ethnicity, culture, illness, the nature of the visit (difficult procedures, new diagnoses), concerns over privacy, and trust may all play a role and make the 2 populations demonstrably different.35,36 Among patients who started the survey, 16.5% did not complete it, and some obvious inaccuracies in entry (eg, age of patient) suggest the need to assess computer literacy and attention to inadvertent reinforcement of the digital divide.37,38

The reliability and reproducibility of patient responses over time and the relationship of these responses to clinical outcomes need to be established in further studies. Of note, the kiosks evaluated only 1 aspect of the care experience, the interaction with the provider, and did not capture the full range of the patient experience, from scheduling to check in and check out, through to billing. In addition, the ability to obtain instant data must be handled with respect for patient confidentiality, and access to and reporting of data need to be carefully monitored.

CONCLUSIONS

Healthcare reform legislation has triggered rapid adoption of clinical healthcare technologies, bringing with it the opportunity to introduce clinical tools that also empower the most central stakeholder in the healthcare system—the patient. As primary care practices move toward adoption of the medical home model and focus on improving chronic disease management, patient feedback data will be valuable in evaluating patient perceptions of, and engagement in, care. This pilot suggests that a standardized, systematic process using health information technologies within the existing physician office infrastructure can provide valuable feedback on the care experience. This technology represents an important advance in our ability to capture the patient’s opinion regarding quality and practice improvement initiatives, and perhaps more importantly, it has the potential for directly engaging patients in their care and ultimately improving outcomes.

Acknowledgments

The authors gratefully acknowledge Andrew Laign, MBA, for his help in collecting observational data on kiosk use and Mirar Bristol-Demeter, MA, for statistical consultation. Aron Starosta, PhD, Judy Shea, PhD, and Anje van Berckelear, MD, provided valuable comments on prior versions of this manuscript.

Author Affiliations: From Division of General Internal Medicine (DND, SCD), University of Pennsylvania, Philadelphia.

Funding Source: This research was not supported by grant funding; however, the kiosk hardware and software platform were provided free of charge by CarePartners Plus, Horsham, PA.

Author Disclosures: The authors (DND, SCD) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (DND, SCD); acquisition of data (DND, SCD); analysis and interpretation of data (DND, SCD); drafting of the manuscript (DND, SCD); critical revision of the manuscript for important intellectual content (DND, SCD); statistical analysis (DND); obtaining funding (SCD); administrative, technical, or logistic support (DND, SCD); and supervision (SCD).

Address correspondence to: Danae N. DiRocco, MPH, Division of General Internal Medicine, University of Pennsylvania, 3701 Market St, Ste 741, Philadelphia, PA 19104. E-mail: danae.dirocco@uphs.upenn.edu.

1. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.

2. Berwick DM. Measuring physicians’ quality and performance: adrift on lake Wobegon. JAMA. 2009;302(22):2485-2486.

3. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213-220.

4. Teutsch C. Patient-doctor communication. Med Clin N Am. 2003;87(5):1115-1145.

5. Lang F, Floyd MR, Beine KL, Buck P. Sequenced questioning to elicit the patient’s perspective on illness: effects on information disclosure, patient satisfaction, and time expenditure. Fam Med. 2002;34(5):325-330.

6. Kaplan SH, Greenfield S, Gandek B, Rogers WH, Ware JE Jr. Characteristics of physicians with participatory decision-making styles. Ann Intern Med. 1996;124(5):497-504.

7. Kaplan SH, Gandek B, Greenfield S, Rogers W, Ware JE. Patient and visit characteristics related to physicians’ participatory decision-making style: results from the Medical Outcomes Study. Med Care. 1995;33(12):1176-1187.

8. Keating NL, Green DC, Kao AC, Gazmararian JA, Wu VY, Cleary PD. How are patients’ specific ambulatory care experiences related to trust, satisfaction, and considering changing physicians. J Gen Intern Med. 2002;17(1):29-39.

9. Bieber C, Müller KG, Nicolai J, Hartmann M, Eich W. How does your doctor talk with you? preliminary validation of a brief patient self-report questionnaire on the quality of physician-patient interaction. J Clin Psychol Med Settings. 2010;17(2):125-136.

10. Garratt AM, Bjaertnes ØA, Krogstad U, Gulbrandsen P. The OutPatient Experiences Questionnaire (OPEQ): data quality, reliability, and validity in patients attending 52 Norwegian hospitals. Qual Saf Health Care. 2005;14(6):433-437.

11. Ross CK, Steward CA, Sinacore JM. A comparative study of seven measures of patient satisfaction. Med Care. 1995;33(4):392-406.

12. Zandbelt LC, Smets EMA, Oort FJ, Godfried MH, de Haes HC. Satisfaction with the outpatient encounter: a comparison of patients’ and physicians’ views. J Gen Intern Med. 2004;19(11):1088-1095.

13. Marceau LD, Link CL, Smith LD, Carolan SJ, Jamison RN. In-clinic use of electronic pain diaries: barriers of implementation among pain physicians. J Pain Symptom Manage. 2010;40(3):391-404.

14. Lobach DF, Silvey GM, Willis JM, et al. Coupling direct collection of health risk information from patients through kiosks with decision support for proactive care management. AMIA Annu Symp Proc. 2008:429-433.

15. Goldstein J. Private practice outcomes: validated outcomes data collection in private practice. Clin Orthop Relat Res. 2010;468(10):2640-2645.

16. Sequist TD, Schneider EC, Anastario M, et al. Quality monitoring of physicians: linking patients’ experiences of care to clinical quality and outcomes. J Gen Intern Med. 2008;23(11):1784-1790.

17. Salazar WH. Management of depression in the outpatient office. Med Clin North Am. 1996;80(2):431-455.

18. Hirschfeld RM, Keller MB, Panico S, et al. The National Depressive and Manic-Depressive Association consensus statement on the undertreatment of depression. JAMA. 1997;277(4):333-340.

19. Baik S, Bowers BJ, Oakley LD, Susman JL. The recognition of depression: the primary care clinician’s perspective. Ann Fam Med. 2005;3(1):31-37.

20. Nyweide DJ, Weeks WB, Gottlieb DJ, Casalino LP, Fisher ES. Relationship of primary care physicians’ patient caseload with measurement of quality and cost performance. JAMA. 2009;302(22):2444-2450.

21. Accreditation Council for Graduate Medical Education. Common Program Requirements. Effective July 1, 2011. http://www.acgme.org/acWebsite/home/Common_Program_Requirements_07012011.pdf. Accessed October 13, 2010.

22. National Committee on Quality Assurance. Patient-centered medical home. http://www.ncqa.org/tabid/631/Default.aspx. Accessed October 15, 2010.

23. American Academy of Family Physicians, American Academy of Pediatrics, American College of Physicians, American Osteopathic Association. Joint Principles of the Patient-Centered Medical Home. http://www.acponline.org/advocacy/where_we_stand/medical_home/approve_jp.pdf. Published March 2007. Accessed October 15, 2010.

24. Velikova G, Wright EP, Smith AB, et al. Automated collection of quality-of-life data: a comparison of paper and computer touch-screen questionnaires. J Clin Oncol. 1999;17(3):998-1007.

25. Drummond HE, Ghosh S, Ferguson A, Brackenridge D, Tiplady B. Electronic quality of life questionnaires: a comparison of pen-based electronic questionnaires with conventional paper in a gastrointestinal study. Qual Life Res. 1995;4(1):21-26.

26. Lee SJ, Kavanaugh A, Lenert L. Electronic and computer-generated patient questionnaires in standard care. Best Pract Res Clin Rheumatol. 2007;21(4):637-647.

27. Johnson LC, Beaton R, Murphy S, Pike K. Sampling bias and other methodological threats to the validity of health survey research. Int J Stress Manag. 2000;7(4):247-267.

28. Crowne D, Marlowe D. The Approval Motive: Studies in Evaluative Dependence. New York: Wiley; 1964.

29. Guilford JP. Psychometric Methods. 2nd ed. New York: McGraw-Hill; 1954.

30. Podsakoff PM, MacKenzie SB, Lee JY, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol. 2003;88(5):879-903.

31. Sax L, Gilmartin SK, Bryant AN. Assessing response rates and nonresponse bias in web and paper surveys. Res High Educ. 2003;44(4):409-432.

32. Nicholas D, Huntington P, Williams P, Vickery P. Health information: an evaluation of the use of touch screen kiosks in two hospitals. Health Info Libr J. 2001;18(4):213-219.

33. Huntington P, Williams P, Nicholas D. Age and gender differences of a touch-screen kiosk: a study of kiosk transaction log files. Inform Prim Care. 2002;10(1):3-9.

34. Wendler D, Kington R, Madans J, et al. Are racial and ethnic minorities less willing to participate in health research? PLoS Med. 2006;3(2):e19.

35. Smedley BD, Stith AY, Nelson AR, eds. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington, DC: The National Academies Press; 2003.

36. Johnson RL, Saha S, Arbelaez JJ, Beach MC, Cooper LA. Racial and ethnic differences in patient perceptions of bias and cultural competence in health care. J Gen Intern Med. 2004;19(2):101-110.

37. Chang BL, Bakken S, Brown SS, et al. Bridging the digital divide: reaching vulnerable populations. J Am Med Inform Assoc. 2004;11(6):448-457.

38. Dickerson S, Reinhart AM, Feeley TH, et al. Patient Internet use for health information at three urban primary care clinics. J Am Med Inform Assoc. 2004;11(6):499-504.
