The American Journal of Managed Care

February 2023, Volume 29, Issue 2

Association Between Use of Clinician Performance Information and Patient Experience

High patient experience scores were associated with the collection and use of any clinician performance information, especially with whether the practice shared this information internally for comparison.

ABSTRACT

Objectives: To study the association between the collection and use of clinician performance information in physician practices and patient experience in primary care.

Study Design: Patient experience scores were calculated from the 2018-2019 Massachusetts Statewide Survey of Adult Patient Experience of Primary Care. Physicians were attributed to physician practices using the Massachusetts Healthcare Quality Provider database. Scores were matched to information on the collection or use of clinician performance information from the National Survey of Healthcare Organizations and Systems using practice name and location.

Methods: We conducted observational multivariable generalized linear regressions at the patient level, in which the dependent variable was 1 of 9 patient experience scores and the independent variable was 1 of 5 domains of the collection or use of performance information at the practice. Patient-level controls included self-reported general health, self-reported mental health, age, sex, education, and race/ethnicity. Practice-level controls included the size of the practice and the availability of weekend and evening hours.

Results: Nearly 90% of practices in our sample collected or used clinician performance information. High patient experience scores were associated with whether any information was collected and used, especially with whether the practice shared this information internally for comparison. Among practices that used clinician performance information, patient experience was not associated with whether the information was used in more aspects of care.

Conclusions: The collection and use of clinician performance information were associated with better primary care patient experience among physician practices. Deliberate efforts to use clinician performance information in ways that cultivate clinicians’ intrinsic motivation may be especially effective for quality improvement.

Am J Manag Care. 2023;29(2):e51-e57. https://doi.org/10.37765/ajmc.2023.89321

_____

Takeaway Points

High patient experience scores were associated with the collection and use of any clinician performance information, especially with whether the practice shared this information internally for comparison.

  • Whether a physician practice uses clinician performance information may help identify high-quality practices.
  • Deliberate efforts to use clinician performance information in ways that cultivate clinicians’ intrinsic motivation may be especially effective for quality improvement.
  • Our findings suggest that policies that impose requirements on the collection and measurement of quality information alone may not be sufficient tools for quality improvement.
  • Future studies may directly test the effect of policies that encourage the use of quality information on quality improvement.

_____

Nearly half of all primary care physicians and specialists are in group practices.1 Although many hope that group practices may have more tools and resources to improve the quality of care, much remains unknown about which specific tools leveraged by group practices may improve quality. One increasingly common strategy among physician practices is to collect information on clinician performance.2-4 Some organizations and policy makers hope that collecting this information can be an important step toward improving care quality.5,6

As primary care delivery increasingly focuses on patient-centeredness, it is important to ask whether initiatives commonly used by primary care organizations, such as the collection and use of clinician performance information, translate to better care as perceived by patients.7 There are many ways in which the collection and use of performance information could improve patient experience. When used well, performance information may allow organizations to identify problem areas, optimize the allocation of resources, and share best practices, all of which may improve patient experience. Although many health care organizations have noted benefits from the increasingly common practice of collecting performance information, evidence on the relationship between performance measurement and patient experience remains scarce. CMS develops, implements, and administers several patient experience surveys. A few studies have looked at the relationship between various organizational practices and policies and patient experience measures.8 Data from several studies have shown patient experience measures to be related to clinical quality,9-15 although most studies have focused on the Medicare population.16,17 A number of recent studies have also explored the association between organizational characteristics and quality outcomes, also with a focus on the Medicare population.18,19

This study explores the association between the collection and use of clinician performance information and patient experience in primary care services in the commercially insured population using data from 2 innovative surveys. Findings from this study could help identify high-quality practices and help practices evaluate the effectiveness of their policies.

METHODS

Data and Variables

This study used 2 survey data sets. Data on the use of clinician performance information came from the 2017-2018 National Survey of Healthcare Organizations and Systems–Practice Level (NSHOS-P). NSHOS was a collection of nationally representative surveys that aimed to characterize the structure, ownership, leadership, and care delivery capabilities of health care systems; physician practices with 3 or more primary care physicians; and hospitals. We used only the physician practice component of the survey and focused on practices from Massachusetts. A total of 236 physician practices with 3 or more primary care physicians were surveyed in Massachusetts; 82 responded, for a response rate of 35%. From this survey, we extracted 5 measures based on answers to 3 survey questions related to the use of clinician performance information. These measures were:

  • Number of areas in which the practice collects information on individual clinician performance (out of 7 clinical areas)
  • Number of areas in which the practice uses information on individual clinician performance for feedback (out of 7 aspects of patient care)
  • Number of areas in which the practice uses information on individual clinician performance for internal quality improvement (out of 7 aspects of patient care)
  • Number of areas in which the practice uses information on individual clinician performance for physician compensation (out of 7 aspects of patient care)
  • Whether reports are shared within the group in a way that an individual clinician can compare their performance with that of other clinicians within the practice (yes or no)

For each of the first 4 measures, we constructed a binary variable indicating whether the practice collected or used no (0) or any (1-7) information. These binary variables, together with the binary indicator from the last measure, constituted the 5 independent variables on the use of clinician performance information in our primary analysis. We further divided practices that collected or used any clinician performance information into those that did so in a high number of areas or aspects (4-7) and those that did so in a low number of areas or aspects (1-3). The distribution of these 5 measures of clinician performance across the practices in our sample can be found in Table 1. The details of the sampling framework and survey answers can be found in eAppendix A and eAppendix B (eAppendices available at ajmc.com).
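
For illustration only, the sketch below shows one way the any/none and high/low indicators described above could be constructed from the NSHOS-P counts; the data frame and column names are hypothetical stand-ins, not the survey's actual field names.

```python
import pandas as pd

# Hypothetical NSHOS-P extract: one row per practice with counts (0-7) of the
# areas or aspects in which clinician performance information is collected or used.
nshos = pd.DataFrame({
    "practice_id": [1, 2, 3],
    "n_collect": [0, 3, 6],         # areas in which information is collected
    "n_feedback": [0, 2, 7],        # aspects used for clinician feedback
    "n_qi": [0, 4, 5],              # aspects used for internal QI
    "n_compensation": [0, 0, 3],    # aspects used for physician compensation
    "shares_internally": [0, 1, 1], # reports shared internally for comparison
})

for col in ["n_collect", "n_feedback", "n_qi", "n_compensation"]:
    # Primary analysis: any (1-7) vs none (0).
    nshos[f"{col}_any"] = (nshos[col] > 0).astype(int)
    # Subsample analysis: high (4-7) vs low (1-3); undefined when the count is 0.
    high = pd.Series(pd.NA, index=nshos.index, dtype="Int64")
    high[nshos[col].between(1, 3)] = 0
    high[nshos[col] >= 4] = 1
    nshos[f"{col}_high"] = high
```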

A second data source was the Massachusetts Health Quality Partners (MHQP) Statewide Survey of Adult Patient Experience of Primary Care Data (a component of the MHQP Statewide Patient Experience Survey, or MHQP-PES). This was a statewide mail survey sent to patients who had had primary care visits in the prior 12 months. All patients in the sample were required by their commercial health plan to have a primary care provider (PCP), and patient responses were attributed to the PCP listed by their health plan. The survey asked patients to rate the quality of certain doctor-patient interactions and other aspects of care using a nationally developed standard survey instrument. The surveys included in this study were fielded from April to July in 2018 and 2019. They sampled commercially insured patients from 771 adult and 315 pediatric primary care practices of the 1576 statewide. A total of 41,976 responses were received from the 192,625 sampled adult patients, a response rate of 21.8%.

The MHQP-PES used in this study is based on the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician & Group Survey, 3.0.20 The survey had 19 items that made up 7 composites (communication [4 items], integration of care [3 items], organizational access [3 items], self-management support [2 items], knowledge of patient [2 items], adult behavioral health [2 items], and office staff [2 items]) and 2 individual items that measured overall ratings (willingness to recommend and provider rating). The mean of each composite score can be found in Table 2, and the distribution of the number of responses per practice in our sample can be found in eAppendix Table 1. The components were calculated for adult patients only. Details on the survey questions can be found in eAppendix C.

The NSHOS-P and MHQP-PES data were matched on the names and addresses of the physician practices. The study sample included data from 73 practices that had both NSHOS-P and reliable MHQP-PES data (see eAppendix C for a definition of reliable survey data). The merged data were at the patient respondent level (for MHQP-PES respondents). The construction of the sample practices is shown in the Figure.
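
To illustrate the practice-level match described above, here is a minimal sketch of an exact merge on normalized practice name and address; the frames, column names, and normalization rules are assumptions for illustration, not the study's actual matching procedure.

```python
import pandas as pd

def normalize(s: pd.Series) -> pd.Series:
    """Lowercase and strip punctuation/extra whitespace so that, e.g.,
    'Main St.' and 'main st' produce the same match key."""
    return (s.str.lower()
             .str.replace(r"[^a-z0-9 ]", "", regex=True)
             .str.replace(r"\s+", " ", regex=True)
             .str.strip())

# Hypothetical frames: nshos has one row per practice; mhqp has one row per
# patient respondent with the attributed practice's name and address.
nshos = pd.DataFrame({"practice_name": ["Acme Primary Care"],
                      "address": ["1 Main St., Boston, MA"],
                      "n_collect": [5]})
mhqp = pd.DataFrame({"practice_name": ["ACME PRIMARY CARE"],
                     "address": ["1 Main St, Boston, MA"],
                     "communication_score": [92.0]})

for df in (nshos, mhqp):
    df["match_key"] = normalize(df["practice_name"]) + "|" + normalize(df["address"])

# Respondent-level analysis file: practice-level NSHOS-P measures joined
# onto each MHQP-PES respondent.
merged = mhqp.merge(nshos.drop(columns=["practice_name", "address"]),
                    on="match_key", how="inner")
```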

Controls

From NSHOS-P responses, we created a continuous variable of the number of employed and affiliated/contracted providers at a practice (Q.55: What is your best estimate of the total number of full- and part-time providers in your practice?). We controlled for practice size because large and small practices may have different levels of resources that may affect both patient experience and the practices’ ability to collect and use clinician performance information. We also controlled for whether the practice offered weekend or evening hours, as the availability of appointments was likely to affect patient experience. From the MHQP-PES, we used responses from questions that asked about self-reported general health, self-reported mental health, age, sex, education, and race/ethnicity to generate patient-level controls. For self-reported general and mental health, responses were on a 5-point Likert-type scale. We scored these as: excellent = 5, very good = 4, good = 3, fair = 2, and poor = 1. Response options for age were binned into: 18 to 24 years = 1; 25 to 34 = 2; 35 to 44 = 3; 45 to 54 = 4; 55 to 64 = 5; 65 to 74 = 6; and 75 or older = 7. For sex, response options were male = 1 and female = 0. For education, response options were scored as follows: eighth grade or less = 1; some high school, but did not graduate = 2; high school graduate or General Educational Development = 3; some college or 2-year degree = 4; 4-year college graduate = 5; and more than 4-year college degree = 6. For race/ethnicity, we included an indicator each for whether the individual was White, Black, Asian, or Hispanic (not mutually exclusive).
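
The scoring of patient-level controls described above could be implemented with simple lookups, as in the sketch below; the raw response strings and column names are hypothetical stand-ins for the actual MHQP-PES fields.

```python
import pandas as pd

# Hypothetical respondent-level MHQP-PES extract.
mhqp = pd.DataFrame({
    "general_health_raw": ["Excellent", "Fair"],
    "mental_health_raw": ["Very good", "Good"],
    "age_raw": ["45 to 54", "75 or older"],
    "sex_raw": ["Male", "Female"],
    "education_raw": ["4-year college graduate", "Some college or 2-year degree"],
    "race_ethnicity_raw": ["White", "Black; Hispanic"],
})

likert = {"Excellent": 5, "Very good": 4, "Good": 3, "Fair": 2, "Poor": 1}
age_bins = {"18 to 24": 1, "25 to 34": 2, "35 to 44": 3, "45 to 54": 4,
            "55 to 64": 5, "65 to 74": 6, "75 or older": 7}
education = {"8th grade or less": 1, "Some high school, but did not graduate": 2,
             "High school graduate or GED": 3, "Some college or 2-year degree": 4,
             "4-year college graduate": 5, "More than 4-year college degree": 6}

mhqp["general_health"] = mhqp["general_health_raw"].map(likert)
mhqp["mental_health"] = mhqp["mental_health_raw"].map(likert)
mhqp["age_group"] = mhqp["age_raw"].map(age_bins)
mhqp["male"] = (mhqp["sex_raw"] == "Male").astype(int)
mhqp["education"] = mhqp["education_raw"].map(education)

# Race/ethnicity indicators are not mutually exclusive.
for group in ["White", "Black", "Asian", "Hispanic"]:
    mhqp[f"race_{group.lower()}"] = (
        mhqp["race_ethnicity_raw"].str.contains(group, case=False).astype(int)
    )
```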

Statistical Analysis

To assess the relationship between the use of clinician performance information and patient experience, we regressed each composite score of patient experience on each of the 5 measures of the use of clinician performance information, with controls, using data at the respondent level (for MHQP-PES respondents). Patient experience measures and patient characteristics varied across patients, whereas the other measures varied only across practices. Specifically, we ran the generalized linear regression:

$\text{PatientExperienceScore}_{ij} = \beta_1 \text{NSHOSmeasure}_j + \beta_2 X_{ij} + \alpha_{ij} + \varepsilon_{ij}$

where $\text{PatientExperienceScore}_{ij}$ was 1 of the 9 patient experience scores for patient i attending practice j, $\text{NSHOSmeasure}_j$ was 1 of the 5 measures on the use of clinician performance information from the NSHOS survey, and $X_{ij}$ was a list of controls. The SEs were clustered at the practice level. To address concerns about multiple hypothesis testing, we adjusted the P values of the estimates using a free step-down resampling method.21 This method is a more powerful alternative to the commonly used Bonferroni method for controlling the familywise error rate when testing multiple hypotheses.21,22
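
As a minimal sketch of this specification (not the authors' code), the model for one outcome-measure pair could be fit by ordinary least squares with practice-clustered SEs, as below; the analysis frame and variable names are hypothetical, and the free step-down resampling adjustment would then be applied across the resulting P values.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_one(analysis_df: pd.DataFrame, outcome: str, nshos_measure: str):
    """Regress one patient experience score on one NSHOS-P measure plus
    patient- and practice-level controls, clustering SEs by practice."""
    controls = ["general_health", "mental_health", "age_group", "male",
                "education", "race_white", "race_black", "race_asian",
                "race_hispanic", "practice_size", "weekend_evening_hours"]
    formula = f"{outcome} ~ {nshos_measure} + " + " + ".join(controls)
    fit = smf.ols(formula, data=analysis_df).fit(
        cov_type="cluster", cov_kwds={"groups": analysis_df["practice_id"]}
    )
    # P values from the 45 such regressions (9 outcomes x 5 measures) would
    # then be adjusted with the free step-down resampling method.
    return fit.params[nshos_measure], fit.pvalues[nshos_measure]
```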

For each independent variable, we also generated a composite measure of the overall effect of that clinician performance measure on all 9 dimensions of patient experience by taking a weighted mean of the individual regression coefficients, weighted by the SD of each of the 9 patient experience scores. In addition to providing a summary measure of the relationship between all patient experience domains and the collection/use of clinician performance measures, this composite measure reduced the number of statistical tests and thus partially alleviated concerns about multiple hypothesis testing.21 The exact calculation of the standardized coefficient can be found in eAppendix D.
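
For illustration only, the sketch below shows one plausible reading of this composite: each coefficient is expressed in SD units of its outcome and then averaged with SD-based weights. The exact calculation is given in eAppendix D, so treat the standardization step here as an assumption rather than the paper's formula.

```python
import numpy as np

def composite_effect(coefs: np.ndarray, outcome_sds: np.ndarray) -> float:
    """Weighted mean of the 9 per-outcome regression coefficients.
    Expressing each coefficient in SD units of its outcome is an assumption
    here; the paper's exact formula appears in eAppendix D."""
    standardized = coefs / outcome_sds            # coefficient in SD units
    weights = outcome_sds / outcome_sds.sum()     # weights proportional to SDs
    return float(np.dot(weights, standardized))
```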

RESULTS

Table 2 shows the mean patient experience scores for all practices and stratified by whether the practice collected or used clinician performance information. Overall, patient experience scores were quite high, with means higher than 85 (out of 100) for most measures. Two aspects of patient experience, self-management and adult behavioral health, had lower means of 63 and 75, respectively. These 2 measures also had higher SDs and more variation across practices than the other patient experience measures. The intraclass (class = practice) correlations for the patient experience measures were between 0.02 and 0.05, suggesting that the variation in patient experience scores within a practice was much greater than the variation in scores across practices. Table 2 also shows close similarities in most patient experience measures between practices that collected and used clinician performance information and practices that did not. The 1 exception was that practices that shared performance information internally for comparison appeared to have higher patient experience scores, but given the large SDs, this difference could not be distinguished from chance. These mean comparisons also did not adjust for any other practice-level characteristics or for differences in patient composition between practices that collected or used clinician performance information and practices that did not.

The regression coefficients of patient experience scores on different measures of the use of clinician performance information are shown in Table 3. After controlling for patient and organizational characteristics, the standardized coefficient on whether a practice collected any information on the primary care physician was positive but not significant at the 5% level. When looking at the 9 domains of patient experience scores separately, practices that collected any information on clinician performance performed better in only 2 domains: integration of care and organizational access (a measure of whether patients were getting timely appointments, care, and information). The coefficients on the other 7 domains were very imprecisely estimated, with adjusted P values greater than .999 and CIs too broad to be meaningful.

The standardized coefficients on whether the practice used clinician performance information for internal quality improvement (QI), feedback, physician compensation, and internal comparison were all positive, but the coefficient was not significant at the 5% level for whether the practice used clinician performance information for feedback. Practices that used clinician performance information for internal QI efforts had patient experience scores that were 0.09 SDs higher than those of practices that did not use clinician performance information for internal QI efforts. Practices that used clinician performance information for physician compensation had patient experience scores that were 0.07 SDs higher than those of practices that did not use the information for physician compensation. Practices that shared clinician performance quality information internally for comparison had patient experience scores that were 0.19 SDs higher than those of practices that did not share such information. Practices that shared clinician performance quality information internally for comparison reported higher patient experience scores in 6 of 9 domains.

Table 4 shows the regression coefficient on whether the practices used clinician performance information in a high number of aspects (4-7) or a low number of aspects (1-3) among practices that collected or used clinician performance information at all. Because this comparison was performed on a select subsample, the coefficients were much less precisely estimated. Overall, using clinician performance information in a high number of areas did not have a statistically significant association with higher patient experience in any domain.

DISCUSSION

High patient experience scores in primary care were associated with whether clinician performance information was collected and used at the physician practice level. The association was strongest for whether the practice shared reports from clinician performance information internally for comparison. Among practices that collected or used clinician performance information, using this information in more clinical areas was not associated with higher patient experience scores.

The collection of clinician performance information was associated with greater integration of care and organizational access. This may be because the collection of such information enabled the practice to better match patients to clinicians within the organization based on patient needs and clinician expertise, leading to more effective management of the caseload.

Compared with whether practices collected any clinician performance information, measures of whether the practice used clinician performance information were associated with a broader set of patient experience measures. This was especially true for whether the practice shared such information internally for comparison and less true for whether the practice used such information for physician compensation. Part of the reason that the correlation was especially strong for internal sharing of clinician performance reports for comparison may be that these practices were especially good at encouraging peer feedback and harnessing the intrinsic motivation of clinicians to provide high-quality care. This is consistent with findings from 2 strands of research. First, previous research findings have shown that team-based work may improve quality in primary care.19,23 Second, previous research findings have shown that intrinsic motivation may be more powerful in leading to desirable behavior than extrinsic motivation (such as performance payment or fines).24-26

Among the 9 measures of patient experience, integration of care had a consistently positive correlation with the collection and use of information. This might be because integrated care management programs have gained increasing attention as a tool to improve primary care quality in the past decade.27,28 We did not find a positive association, significant at the 5% level, between the collection and use of clinician performance information and patient experience with office staff or with adult behavioral health. This might be because efforts centered on clinicians are unlikely to affect aspects of patient care that rely heavily on nonphysician professionals. For example, adult behavioral health care usually involves a multidisciplinary team that includes physicians, nurses, administrative staff, and social workers.29

Our results suggest that using clinician performance information may contribute to improvement in patient experience, and such contribution is likely to be greatest when the collected information is used deliberately with a goal of QI. Future research may seek to understand more about the mechanism through which the deliberate use of clinician performance information may lead to QI and about the characteristics of practices that collect performance information without effectively using it. It is possible that these practices collect information as a passive response to a requirement by the health system, the payer, or the regulator. It is also possible that they lack the resources to translate information into QI. In eAppendix Table 2, we show that practices collecting clinician performance information are much less likely to have weekend and evening hours than practices not collecting such information. Although our study controls for the availability of weekend and evening hours, this pattern suggests that, with limited resources, practices may have to choose between opening for more hours and allocating resources to collecting and using clinician performance information. eAppendix Table 2 also shows that practices that collect or use clinician performance information on average scored higher on a composite measure of how well they incorporate patient input and on a composite score for team culture. Such higher scores could be the result of deliberate efforts to use clinician performance information and could serve as mechanisms through which greater use of performance information may improve patient experience.

Overall, although practices that collected or used clinician performance information had higher patient experience scores, the difference was relatively small compared with the overall variation in patient experience scores across practices. This suggests that although the use of clinician performance information may matter for patient experience, patient experience may be influenced by many other factors.

Limitations

This study has some limitations. It covers a relatively small number of practices in a single state. Both surveys have response rates below 50%. Although these rates are not unusually low for surveys of such type, nonresponses may affect the representativeness of our sample. For example, physician practices that respond to the survey may be the ones that have more resources or may be the ones with clinical or organizational initiatives that they are eager to make known. They may be both more likely to collect and use clinician performance information and more likely to have other resources to improve patient experience. For the MHQP-PES, patients may be more likely to respond to the survey if they have had particularly good or particularly bad experiences with their PCPs. Such selective responses may affect the observed relationship between the collection and use of clinician performance information and patient experience in either direction. In addition, eAppendix Table 3 shows that patients included in our study had a lower likelihood of being Black or Hispanic than the general Massachusetts population (4% and 5% vs 9% and 12%, respectively).30 This may be because Black or Hispanic patients were underrepresented in the practices that responded to the NSHOS-P surveys or among the patients who responded to the MHQP surveys. This difference limits the applicability of our findings to these groups of patients.

A second limitation of this study is that it focuses on the commercially insured population only and may not be representative of the experience of patients with publicly funded insurance. Although the collection and use of clinician performance information are practice-wide, patients with publicly funded insurance may have different patient experiences, especially in aspects related to organizational access.

Last, the association does not speak to a causal relationship between the use of clinician performance information and patient experience. It is possible that practices that focused on the collection and use of clinician performance information also adopted other organizational practices that may have led to QI. Nevertheless, the findings of this study add to our knowledge of the relationship between organizational practices and patient experience in the commercially insured population. They could also help us identify high-quality practices and generate hypotheses for further testing.

CONCLUSIONS

The collection and use of clinician performance information were associated with better primary care patient experience among physician practices. Deliberate efforts to use clinician performance information in ways that cultivate clinicians’ intrinsic motivation may be especially effective for quality improvement. 

Author Affiliations: RAND Corporation (RAZ), Boston, MA; Massachusetts Health Quality Partners (NM, RR, JC), Brighton, MA; Department of Economics, Harvard University (DC), Cambridge, MA.

Source of Funding: This work was supported by the Agency for Healthcare Research and Quality’s Comparative Health System Performance Initiative under grant No. 1U19HS024075 and grant No. U19HS024074.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (NM, RR, JC, DC); acquisition of data (NM, RR, JC); analysis and interpretation of data (RAZ, NM, RR, DC); drafting of the manuscript (RAZ, NM, DC); critical revision of the manuscript for important intellectual content (RAZ, NM, JC, DC); statistical analysis (NM, RR); obtaining funding (DC).

Address Correspondence to: R. Annetta Zhou, PhD, RAND Corporation, 20 Park Plaza #920, Boston, MA 02116. Email: azhou@rand.org.

REFERENCES

1. Muhlestein DB, Smith NJ. Physician consolidation: rapid movement from small to large group practices, 2013–15. Health Aff (Millwood). 2016;35(9):1638-1642. doi:10.1377/hlthaff.2016.0130

2. Friedberg MW, SteelFisher GK, Karp M, Schneider EC. Physician groups’ use of data from patient experience surveys. J Gen Intern Med. 2011;26(5):498-504. doi:10.1007/s11606-010-1597-1

3. Safran DG, Karp M, Coltin K, et al. Measuring patients’ experiences with individual primary care physicians. J Gen Intern Med. 2006;21(1):13-21. doi:10.1111/j.1525-1497.2005.00311.x

4. Stanowski AC, Simpson K, White A. Pay for performance: are hospitals becoming more efficient in improving their patient experience? J Healthc Manag. 2015;60(4):268-285.

5. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Measuring patient experience as a strategy for improving primary care. Health Aff (Millwood). 2010;29(5):921-925. doi:10.1377/hlthaff.2010.0238

6. Chassin MR. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff (Millwood). 2002;21(4):40-51. doi:10.1377/hlthaff.21.4.40

7. Davis K, Schoenbaum SC, Audet AM. A 2020 vision of patient-centered primary care. J Gen Intern Med. 2005;20(10):953-957. doi:10.1111/j.1525-1497.2005.0178.x

8. Goff SL, Mazor KM, Priya A, Pekow PS, Lindenauer PK. Characteristics of high-performing primary care pediatric practices: a qualitative study. Acad Pediatr. 2020;20(2):267-274. doi:10.1016/j.acap.2019.04.005

9. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71(5):522-554. doi:10.1177/1077558714541480

10. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41-48.

11. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570. doi:10.1136/bmjopen-2012-001570

12. Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg. 2014;260(4):592-598. doi:10.1097/SLA.0000000000000932

13. Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med. 2013;368(3):201-203. doi:10.1056/NEJMp1211775

14. Sacks GD, Lawson EH, Dawes AJ, et al. Relationship between hospital performance on a patient satisfaction survey and surgical quality. JAMA Surg. 2015;150(9):858-864. doi:10.1001/jamasurg.2015.1108

15. Tsai TC, Orav EJ, Jha AK. Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2-8. doi:10.1097/SLA.0000000000000765

16. Luxford K, Safran DG, Delbanco T. Promoting patient-centered care: a qualitative study of facilitators and barriers in healthcare organizations with a reputation for improving the patient experience. Int J Qual Health Care. 2011;23(5):510-515. doi:10.1093/intqhc/mzr024

17. Nyweide DJ, Lee W, Cuerdon TT, et al. Association of Pioneer Accountable Care Organizations vs traditional Medicare fee for service with spending, utilization, and patient experience. JAMA. 2015;313(21):2152-2161. doi:10.1001/jama.2015.4930

18. Ahluwalia SC, Harris BJ, Lewis VA, Colla CH. End-of-life care planning in accountable care organizations: associations with organizational characteristics and capabilities. Health Serv Res. 2018;53(3):1662-1681. doi:10.1111/1475-6773.12720

19. Fraze TK, Lewis VA, Tierney E, Colla CH. Quality of care improves for patients with diabetes in Medicare shared savings accountable care organizations: organizational characteristics associated with performance. Popul Health Manag. 2018;21(5):401-408. doi:10.1089/pop.2017.0102

20. CAHPS Clinician & Group Survey. Agency for Healthcare Research and Quality. Updated August 2021. Accessed August 15, 2021. https://www.ahrq.gov/cahps/surveys-guidance/cg/index.html

21. Anderson ML. Multiple inference and gender differences in the effects of early intervention: a reevaluation of the Abecedarian, Perry Preschool, and early training projects. J Am Stat Assoc. 2008;103(484):1481-1495. doi:10.1198/016214508000000841

22. Westfall PH, Young SS, Wright SP. On adjusting P-values for multiplicity. Biometrics. 1993;49(3):941-945. doi:10.2307/2532216

23. Nguyen KH, Chien AT, Meyers DJ, Li Z, Singer SJ, Rosenthal MB. Team-based primary care practice transformation initiative and changes in patient experience and recommended cancer screening rates. Inquiry. 2020;57:46958020952911. doi:10.1177/0046958020952911

24. Song H, Tucker AL, Murrell KL, Vinson DR. Closing the productivity gap: improving worker productivity through public relative performance feedback and validation of best practices. Manag Sci. 2018;64(6):2628-2649. doi:10.1287/mnsc.2017.2745

25. Kolstad JT. Information and quality when motivation is intrinsic: evidence from surgeon report cards. Am Econ Rev. 2013;103(7):2875-2910. doi:10.1257/aer.103.7.2875

26. Gneezy U, Rustichini A. A fine is a price. J Leg Stud. 2000;29(1):1-17. doi:10.1086/468061

27. Hong CS, Siegel AL, Ferris TG. Caring for high-need, high-cost patients: what makes for a successful care management program? Issue Brief (Commonw Fund). 2014;19:1-19.

28. Farrell TW, Tomoaia-Cotisel A, Scammon DL, Day J, Day RL, Magill MK. Care management: implications for medical practice, health policy, and health services research. Agency for Healthcare Research and Quality. April 2015. Accessed May 23, 2022. https://www.ahrq.gov/sites/default/files/publications/files/caremgmt-brief.pdf

29. Robinson P, Oyemaja J, Beachy B, et al. Creating a primary care workforce: strategies for leaders, clinicians, and nurses. J Clin Psychol Med Settings. 2018;25(2):169-186. doi:10.1007/s10880-017-9530-y

30. QuickFacts: Massachusetts. United States Census Bureau. Accessed May 25, 2022. https://www.census.gov/quickfacts/MA
