
The American Journal of Managed Care

October 2019
Volume 25, Issue 10

Physician Clinical Knowledge, Practice Infrastructure, and Quality of Care

Patient-centered practice infrastructure was associated with better care quality only among physicians who scored well on their Maintenance of Certification exam.

ABSTRACT

Objectives: To understand if and how one dimension of physician skill, clinical knowledge, moderates the relationship between practice infrastructure and care quality.

Study Design: We included 1301 physicians who certified in internal medicine between 1991 and 1993 or between 2001 and 2003, took the American Board of Internal Medicine (ABIM)’s Maintenance of Certification (MOC) exam, and completed ABIM’s diabetes or hypertension registry during their 10-year recertification period between 2011 and 2014. Composite quality scores (overall, process, and intermediate outcome) were based on chart abstractions. Practice infrastructure scores were based on a web-based version of the Physician Practice Connections Readiness Survey. Our measure of clinical knowledge was drawn from MOC exam performance.

Methods: We regressed a physician’s composite care quality scores against the interaction between their practice infrastructure and MOC exam scores with controls for physician, practice, and patient panel characteristics.

Results: We found that a physician’s exam performance significantly moderated the association between practice infrastructure and care quality (P for interaction = .007). For example, having a top-quintile practice infrastructure score was associated with a care quality score that was 7.7 (95% CI, 4.3-11.1) percentage points (P <.001) higher among physicians scoring in the top quintile of their MOC exam, but it was unrelated (0.7 [95% CI, -3.8 to 5.3] percentage points; P = .75) to quality among physicians scoring in the bottom quintile on the exam.

Conclusions: Physician skill, such as clinical knowledge, is important to translating patient-centered practice infrastructure into better care quality, and so it may become more consequential as practice infrastructure improves across the United States.

Am J Manag Care. 2019;25(10):497-503

Takeaway Points

Physician clinical knowledge significantly moderated the relationship between patient-centered practice infrastructure and care quality.

  • We found that better practice infrastructure was associated with higher-quality care, especially performance on process measures, only among physicians scoring in the top 2 or 3 quintiles on their Maintenance of Certification exam.
  • These data suggest that individual physician skill may become more important as practice infrastructure improves across the country.
  • In addition, these findings highlight that it is important for patients to be informed about both practice infrastructure quality and a physician’s skill when selecting a doctor.

About a decade ago, several important policies were initiated that promoted practice infrastructure to support patient-centered care.1-4 In response, adoption of basic electronic health records (EHRs) among office-based physicians increased from 20% to more than 50% between 2009 and 2015, and the number of providers participating in medical home initiatives, which utilize practice infrastructure to promote patient-centered care, increased from 14,000 to more than 63,000 during this period.4-7 Underlying these policies is the assumption that better practice infrastructure leads to better patient care, although subsequent research, consisting mostly of patient-centered medical home evaluations, has yielded mixed evidence on the effectiveness of improving practice infrastructure.2,8-12 That said, the heterogeneity observed across medical home interventions suggests that idiosyncratic aspects of the practices themselves may be critical to translating infrastructure improvements into higher-quality care.11

One such factor that may be important to effectively leveraging infrastructure is the underlying skill of the physicians whose care is supported by practice infrastructure. Conceivably, practice infrastructure could preferentially benefit physicians with either higher or lower skill. For example, it might be that lower-skilled physicians benefit more from infrastructure supports, such as embedded clinical reminders or guideline standards, when developing care plans. Alternatively, synergy could exist between a physician’s skills and the ready access to patient information and electronic tools that are embedded in higher-quality systems, and so higher-skilled providers might deliver better care than lower-skilled providers when supported by practice infrastructure. However, we are unaware of any studies examining whether and how physician skill affects the relationship between practice infrastructure and care quality.

We address this gap by examining whether one dimension of physician skill, namely a physician’s clinical knowledge as measured by performance on the American Board of Internal Medicine (ABIM)’s Maintenance of Certification (MOC) examination, moderates the relationship between practice infrastructure and the quality of diabetes or hypertension care among general internists.

METHODS

We identified 1301 primary care physicians (ie, nonsubspecializing general internists) who initially certified between either 1991 and 1993 or 2001 and 2003 and participated in either a diabetes or hypertension Process Improvement Module (PIM) registry from 2011 through 2014.13,14 PIM registries were completed during either the first or second 10-year recertification cycle and involved physicians recording data used to assess their care quality and completing a web-based version of the Physician Practice Connections Readiness Survey (PPC-RS) developed by the National Committee for Quality Assurance.15,16 Data were compiled and merged using ABIM administrative identifiers, which were removed prior to analysis. The study was deemed exempt from review by the Advarra institutional review board.

Measures of Physician Care Quality

Each physician in the study abstracted information from 25 sequential (or randomly chosen) charts for patients aged 15 to 90 years who had a visit in the past year and were noted as having the applicable condition for at least a year. Prior research has demonstrated that patients treated by physicians who elected to complete the PIMs are representative of the patients with diabetes and hypertension typically treated by board-certified internists.17 From each chart, physicians abstracted the patient’s receipt of screening/monitoring tests, laboratory values, and demographics, which were used to assess concordance with process and intermediate outcome quality measures.18,19 Quality measures were used to construct 3 composites using an algorithm designed by an expert panel: (1) an overall quality composite, (2) a process composite, and (3) an intermediate outcomes composite. Each composite reflects the weighted average compliance across the individual measures, ranging from 0% to 100% (see eAppendix A [eAppendices available at ajmc.com] for individual measure criteria and scoring details).20-22
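Schematically, a weighted-average composite of this kind can be written as follows (our notation for illustration only; the expert panel’s exact algorithm and weights are detailed in eAppendix A):

```latex
C_j = 100\% \times \frac{\sum_{i=1}^{m} w_i \, c_{ij}}{\sum_{i=1}^{m} w_i}
```

where c_ij in [0, 1] is physician j’s compliance rate on measure i and w_i is the weight assigned to that measure.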

Measures of Practice Infrastructure

As part of the PIM, each physician completed a web-based version of the PPC-RS.15,16 This survey is a self-assessment tool designed to enable physicians to benchmark their practice infrastructure across 7 subscores: (1) quality measurement and improvement; (2) patient data-tracking systems to follow patients with conditions identified by the physician as important to their practice; (3) appropriate standards, such as guideline reminders; (4) proactive management of important conditions; (5) patient-centered self-care support and education; (6) patient access and communication; and (7) systematic processes such as e-prescribing, test tracking, referral tracking, and interoperable EHRs. The overall score ranges from 0 to 100 and was computed using the National Committee for Quality Assurance scoring algorithm.15,16

Measure of Physician Clinical Knowledge

Our measure of physician clinical knowledge was performance on ABIM’s MOC exam. Exam scores were equated to account for differences in difficulty between exam administrations and scored on a 200- to 800-point scale.23
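As background on this step, Kolen and Brennan23 describe the standard equating methods; for example, a simple linear equating maps a score x on exam form X onto the scale of form Y by matching standardized positions (ABIM’s specific procedure is not detailed here, so this is only a sketch of the general technique):

```latex
e_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\left(x - \mu_X\right)
```

where the mu and sigma terms are the mean and standard deviation of scores on each form.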

Physician and Patient Characteristics

Physician control variables included their gender, whether they were born and/or attended medical school in the United States, the year and type of PIM completed, and whether they completed the PIM during their first or second round of MOC. Practice characteristics used included practice setting (ie, academic, group, or other), size, and HHS region.24,25 We also included zip code-level median household income and hospital service area-level age-, sex-, and race-adjusted Medicare mortality rate to account for the general well-being of the community in which each physician practices.26,27 Patient panel characteristics, collected in the PIMs for risk adjustment, were used as control variables in the analysis, including the percentage of women, percentage in each racial group (white non-Hispanic, black non-Hispanic, Hispanic, Asian or Pacific Islander, or other), percentage in each age group (<45, 45-54, 55-64, 65-74, or >74 years), and percentage with barriers to care (due to language or insurance/payment issues).

Empirical Methods

We used a share regression (ie, fractional response probit regression) to measure associations with our care quality measures.28,29 The advantage of this modeling approach is that it accounts for our dependent measure being a percentage score bounded by 0 and 1 (inclusive) and so, unlike linear regression, ensures that all predictions lie within this interval. Applying this model, we regressed each of the 3 quality composites against practice infrastructure score quintile indicators, MOC exam score quintile indicators, and the measures of physicians’ practice setting, training, demographic characteristics, and patient panel characteristics described previously. All controls were included in the final model, regardless of their statistical significance. To evaluate whether the relationship between care quality and practice infrastructure was moderated by physician clinical knowledge, we included interactions between the practice infrastructure score quintile indicators and the MOC exam score quintile indicators. However, because we used a nonlinear regression model, the magnitude, sign, and statistical significance of the interaction effect cannot be determined directly from the coefficients on the interaction terms, as Ai and Norton point out.30 Karaca-Mandic and colleagues demonstrate that an appropriate method for calculating such interaction effects is the cross partial derivative method.31 In our application, we used the margins function in Stata version 14 (StataCorp; College Station, Texas) to estimate the percentage-point difference in quality score (ie, the marginal effect based on the average derivative across the sample) between each practice infrastructure quintile and the lowest practice infrastructure quintile at each quintile of exam performance.31-33 We also estimated an overall P value testing whether the infrastructure-quality association strengthens as both the exam score and practice infrastructure quintiles increase; to do so, we applied a contrast test of the linear trend in predicted quality score marginal means across increasing levels of both quintile measures using Stata’s margins, contrast function.34 Lastly, for each exam score quintile, we conducted a policy simulation estimating the overall change in quality scores that might be expected if each physician moved from their observed practice infrastructure quintile to the top quintile.
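A minimal Stata sketch of this estimation strategy follows. It is illustrative only: the variable names (quality01 for a composite rescaled to 0-1, infra_q and exam_q for the quintile indicators, and $controls for the covariate list) are hypothetical stand-ins, not ABIM’s actual code, and the exact margins specifications the authors used may differ.

```stata
* Fractional response probit (Papke-Wooldridge); fracreg is available
* in Stata 14+. quality01 is the quality composite rescaled to [0, 1];
* infra_q and exam_q are 1-5 quintile indicators (hypothetical names).
fracreg probit quality01 i.infra_q##i.exam_q $controls, vce(robust)

* Average percentage-point difference between each infrastructure
* quintile and the lowest quintile, evaluated at each exam quintile
* (interaction effects computed via margins, per Karaca-Mandic et al).
margins exam_q, dydx(infra_q)

* One way to obtain an overall trend P value: a linear-by-linear
* contrast of predicted margins across both sets of quintiles.
margins p1.infra_q#p1.exam_q

* Policy simulation: mean predicted quality with every physician set
* to the top infrastructure quintile, by exam quintile, compared
* against the mean prediction at observed infrastructure levels.
margins exam_q, at(infra_q = 5)
margins exam_q
```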

In addition to examining the overall practice infrastructure score, we also conducted a secondary analysis in which we regressed the overall quality composite score against practice infrastructure subscores controlling for the remaining infrastructure score (ie, the total infrastructure score minus the subscore).
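A sketch of this secondary specification under the same assumptions, with ppc_sub standing in for one of the 7 subscores and ppc_total for the overall infrastructure score (again, hypothetical names):

```stata
* Remaining infrastructure score: the total minus the subscore of interest.
generate ppc_rest = ppc_total - ppc_sub

* Overall quality composite against the subscore, its interaction with
* exam quintile, and the remaining infrastructure score.
fracreg probit quality01 c.ppc_sub##i.exam_q c.ppc_rest $controls, vce(robust)

* Average derivative of quality with respect to the subscore at each
* exam quintile; the text reports effects per 5-point increase.
margins exam_q, dydx(ppc_sub)
```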

RESULTS

We found that practice infrastructure was more strongly associated with the overall quality score among physicians who performed better on their MOC exam (overall P value of the interaction = .007) (Table 1). For example, as shown in Table 1, the regression-adjusted difference in the overall quality score between the top and bottom practice infrastructure quintiles was 7.7 percentage points (95% CI, 4.3-11.1; P <.001) among physicians who scored in the top quintile of their MOC exam compared with 0.7 percentage points (95% CI, -3.8 to 5.3; P = .75) among physicians who scored in the bottom quintile. The Figure demonstrates that despite a negative correlation between MOC exam and practice infrastructure scores (r = -0.10; P = .004), a sizable portion of both high and low exam performers worked in high- and low-infrastructure practices, suggesting that sufficient variation existed within levels of exam performance to evaluate the interaction. See eAppendix B for the main effect associations without any interactions, eAppendix C for the probit regression coefficients, and eAppendix D for mean values of the control variables.

Table 2 describes the results of the policy simulation, in which we estimated the potential increase in the overall quality score associated with moving every physician from their observed practice infrastructure quintile to the top practice infrastructure quintile. Among physicians scoring in the top exam quintile, this transition predicts an increase in the average overall quality score of 3.7 percentage points (95% CI, 1.4-6.0; P = .001), which would close 12% of the gap between the average quality score (70.1%) and the maximum possible quality score. The simulated change was not significant for physicians scoring in the bottom exam quintile (0.4 percentage points; 95% CI, -2.3 to 3.0; P = .79).

Table 3 describes the associations when we separately examine the process or intermediate outcome quality scores. The interaction between practice infrastructure and physician clinical knowledge was statistically significant only for the process composite score (overall P value of the interaction = .006). For example, among physicians with a top-quintile exam score, practicing in a top- versus bottom-quintile infrastructure system was associated with a process quality score that was 14.1 percentage points higher (95% CI, 9.0-19.1; P <.001). This association was 1.8 percentage points and not statistically significant (95% CI, -4.8 to 8.5; P = .59) for physicians who performed in the bottom quintile on their MOC examination.

Table 4 delineates analyses examining the 7 practice infrastructure subscores. We found that physician exam performance significantly moderated the association between infrastructure and overall quality only for subscore 7 (having systematic processes such as e-prescribing, test tracking, referral tracking, and interoperable EHRs). A 5-point increase in this subscore was associated with an increase of 1.0 percentage points (95% CI, 0.1-2.0; P = .03) in the overall quality score among physicians in the top quintile of exam performance, but it was not statistically significant among physicians with bottom-quintile exam scores (-1.0 percentage points; 95% CI, -2.2 to 0.3; P = .13).

DISCUSSION

We found that individual physicians may be key to translating high-quality practice infrastructure into better care for patients, as measured by each physician’s performance across an amalgam of individual process and intermediate outcome measures. In particular, we observed that better practice infrastructure was associated with higher care quality only among physicians with better clinical knowledge, as indicated by scoring in either of the top 2 quintiles on ABIM’s MOC exam. In general, the observed association with care quality was largely driven by better performance on process-of-care measures.

Why might practice infrastructure matter for physicians who performed well on their MOC exam but not for physicians who performed more poorly? To help address this question, we examined the individual subscores that comprise the overall practice infrastructure score. We discovered significant differences in the practice infrastructure-quality relationship between top- and bottom-quintile exam performers for the practice infrastructure subscore related to having systematic processes for managing health information. This includes infrastructure elements such as having an EHR that stores structured patient data, including from external organizations; a system for managing and tracking referrals; and a system for ordering tests, retrieving results, and tracking whether they have been completed. This suggests that infrastructure elements that provide ready access to patient information and track whether care plans are being followed may be important drivers of quality among physicians with higher clinical knowledge. For example, a physician with a higher level of clinical knowledge may be able to take advantage of these infrastructure elements to ensure that their patients are following through on care plans that may be more comprehensive than those of a physician with a lower level of clinical knowledge. However, if these 2 physicians work in a system where they have difficulty accessing patient records or tracking their patients’ health data, then neither might be able to consistently deliver high-quality care, regardless of how comprehensive their care plans are, because they may be limited in their ability to identify patients who would benefit from more follow-up.

Another explanation for our findings is that a dimension of physician skill other than knowledge, such as professionalism, may be correlated with both clinical knowledge and diligence in applying the information and tools accessible in high-infrastructure systems. For example, it could be that physicians with a greater commitment to professionalism are both more diligent about maintaining their clinical knowledge, and so score better on their MOC exam, and more motivated to use higher-quality infrastructure to improve their practice, and so take the time to experiment with and optimize use of these systems for improving care. However, in the absence of high-level practice infrastructure, these physicians may be hamstrung in their ability to improve quality. Supporting this, Haber and colleagues have found that performance on ABIM’s certification examination is correlated with other Accreditation Council for Graduate Medical Education physician core competencies, including patient care and professionalism.35 Furthermore, Pham and colleagues report, based on interviews with physicians, that those who scored well on ABIM’s examination tended to be more reflective in discussing their clinical quality, often describing their “good performance as an amalgamation of skills.”36 In contrast, physicians who scored lower on their MOC exams tended to be less reflective and typically framed their quality in terms of their numerical scores compared with their peers.36

These findings suggest that evaluations of interventions designed to improve practice infrastructure should consider characteristics of the providers who comprise these practices, whether the effects we observed are driven by differences in clinical knowledge or by another correlated dimension of physician skill.37 This is consistent with a recent review of medical home evaluations that reported a great deal of heterogeneity in effectiveness across interventions, which the authors partially attributed to local contextual factors.10 In part, this could be due to differences in the medical home interventions themselves, such as variable implementation strategies or placing more or less emphasis on particular practice infrastructure elements.10-12 However, our findings suggest that characteristics of the physicians comprising a practice may be important to successfully translating medical home interventions into higher-quality care and so should be considered by policy makers looking to encourage such practice transformations. Further, our finding that physicians who performed slightly less well on their MOC exam tended to have better practice infrastructure suggests that opportunities exist to distribute infrastructure investment more efficiently: given that the combination of higher-quality infrastructure and better exam performance is associated with higher-quality care, quality gains would be larger if high-quality infrastructure were positively correlated with exam performance.

Limitations

Several limitations must be considered when applying our results. First, because we rely on cross-sectional comparisons, we cannot account for reverse causality in the practice infrastructure and care quality relationship. As such, the overall positive correlation observed between practice infrastructure and care quality might simply be the result of high-quality practices being more likely to improve their infrastructure. However, if these results were the product of high-quality practices acquiring better practice infrastructure, then one would expect the relationship between practice infrastructure and quality to be the same regardless of how the physicians who practice in these systems performed on their MOC exam. That we found the practice infrastructure-quality relationship to be moderated by a physician’s clinical knowledge suggests that for reverse causality, or an unobserved confounding factor, to explain our findings, it would have to be present only among physicians who scored well on their MOC exam and work in a high-infrastructure practice but not present among physicians who (1) scored well but work in a low-infrastructure practice, (2) scored poorly but work in a high-infrastructure practice, or (3) scored poorly and work in a low-infrastructure practice.

Another limitation is that these data reflect only physicians who completed a diabetes or hypertension PIM. To complete either PIM, a physician needs a patient panel comprising enough patients with either condition. However, these conditions are widely prevalent among adults in the United States,38,39 and so presumably most primary care physicians would have a practice capable of supporting either PIM. Further, prior research using PIM data has found that the patients treated by physicians who completed a diabetes or hypertension PIM are representative of the patients typically cared for by primary care physicians certified in internal medicine.17 That said, these are all physicians who decided to maintain their certification, and so these findings may not generalize to physicians who do not participate in MOC.

CONCLUSIONS

We found that better practice infrastructure was associated with higher-quality care only among physicians who performed well on their MOC exam. This finding implies that individual physician skill may be important to successfully translating infrastructure improvements into higher-quality care and that future research is needed to understand what types of support are needed to help lower-skilled physicians leverage practice infrastructure. This is especially important given that these latter physicians are likely to treat underserved populations.40 These findings also highlight that it is important for patients to be informed about both practice infrastructure quality and physician skill when selecting a doctor.

Author Affiliations: American Board of Internal Medicine (JLV, BMG), Philadelphia, PA.

Source of Funding: Financial and material support was provided by the American Board of Internal Medicine. Data used in the current study were collected by the American Board of Internal Medicine.

These data were in part previously presented at the 2016 Academy Health Annual Research Meeting.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JLV, BMG); analysis and interpretation of data (JLV, BMG); drafting of the manuscript (JLV, BMG); critical revision of the manuscript for important intellectual content (JLV, BMG); and statistical analysis (JLV, BMG).

Address Correspondence to: Jonathan L. Vandergrift, MS, American Board of Internal Medicine, 510 Walnut St, Ste 1700, Philadelphia, PA 19106. Email: Jvandergrift@abim.org.

REFERENCES

1. American Academy of Family Physicians; American Academy of Pediatrics; American College of Physicians; American Osteopathic Association. Joint principles of the patient-centered medical home. American College of Physicians website. acponline.org/system/files/documents/running_practice/delivery_and_payment_models/pcmh/demonstrations/jointprinc_05_17.pdf. Published March 2007. Accessed August 26, 2019.

2. Rittenhouse DR, Shortell SM, Fisher ES. Primary care and accountable care—two essential elements of delivery-system reform. N Engl J Med. 2009;361(24):2301-2303. doi: 10.1056/NEJMp0909327.

3. Patient-centered medical home (PCMH). National Committee for Quality Assurance website. ncqa.org/programs/recognition/practices/patient-centered-medical-home-pcmh. Accessed August 26, 2019.

4. Washington V, DeSalvo K, Mostashari F, Blumenthal D. The HITECH era and the path forward. N Engl J Med. 2017;377(10):904-906. doi: 10.1056/NEJMp1703370.

5. Bitton A, Martin C, Landon BE. A nationwide survey of patient centered medical home demonstration projects. J Gen Intern Med. 2010;25(6):584-592. doi: 10.1007/s11606-010-1262-8.

6. Edwards ST, Mafi JN, Landon BE. Trends and quality of care in outpatient visits to generalist and specialist physicians delivering primary care in the United States, 1997-2010. J Gen Intern Med. 2014;29(6):947-955. doi: 10.1007/s11606-014-2808-y.

7. Office-based physician electronic health record adoption. Office of the National Coordinator for Health Information Technology website. dashboard.healthit.gov/quickstats/pages/physician-ehr-adoption-trends.php. Accessed March 1, 2018.

8. Grumbach K, Bodenheimer T. A primary care home for Americans: putting the house in order. JAMA. 2002;288(7):889-893. doi: 10.1001/jama.288.7.889.

9. Jackson GL, Powers BJ, Chatterjee R, et al. The patient centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169-178. doi: 10.7326/0003-4819-158-3-201302050-00579.

10. Sinaiko AD, Landrum MB, Meyers DJ, et al. Synthesis of research on patient-centered medical homes brings systematic differences into relief. Health Aff (Millwood). 2017;36(3):500-508. doi: 10.1377/hlthaff.2016.1235.

11. Mahmud A, Timbie JW, Malsberger R, et al. Examining differential performance of 3 medical home recognition programs. Am J Manag Care. 2018;24(7):334-340.

12. David G, Saynisch PA, Smith-McLallen A. The economics of patient-centered care. J Health Econ. 2018;59:60-77. doi: 10.1016/j.jhealeco.2018.02.012.

13. Duffy FD, Lynn LA, Didura H, et al. Self-assessment of practice performance: development of the ABIM Practice Improvement Module (PIM). J Contin Educ Health Prof. 2008;28(1):38-46. doi: 10.1002/chp.154.

14. Holmboe ES, Meehan TP, Lynn L, Doyle P, Sherwin T, Duffy FD. Promoting physicians’ self-assessment and quality improvement: the ABIM diabetes practice improvement module. J Contin Educ Health Prof. 2006;26(2):109-119. doi: 10.1002/chp.59.

15. Holmboe ES, Arnold GK, Weng W, Lipner R. Current yardsticks may be inadequate for measuring quality improvements from the medical home. Health Aff (Millwood). 2010;29(5):859-866. doi: 10.1377/hlthaff.2009.0919.

16. Overview: Physician Practice Connections—Patient-Centered Medical Home (PPC-PCMH). Kansas Department of Health and Environment website. kdheks.gov/hcf/stakeholders/download/02032009PCMHOverview.pdf. Published 2008. Accessed August 26, 2019.

17. Gray BM, Weng W, Holmboe ES. An assessment of patient-based and practice infrastructure—based measures of the patient-centered medical home: do we need to ask the patient? Health Serv Res. 2012;47(1, pt 1):4-21. doi: 10.1111/j.1475-6773.2011.01302.x.

18. HEDIS measures and technical resources. National Committee for Quality Assurance website. ncqa.org/hedis/measures. Accessed August 26, 2019.

19. Rosenthal MB, Abrams MK, Bitton A; The Patient-Centered Medical Home Evaluators’ Collaborative. Recommended core measures for evaluating the patient-centered medical home: cost, utilization, and clinical quality. The Commonwealth Fund website. commonwealthfund.org/sites/default/files/documents/___media_files_publications_data_brief_2012_1601_rosenthal_recommended_core_measures_pcmh_v2.pdf. Published May 2012. Accessed August 26, 2019.

20. Hess BJ, Weng W, Holmboe ES, Lipner RS. The association between physicians’ cognitive skills and quality of diabetes care. Acad Med. 2012;87(2):157-163. doi: 10.1097/ACM.0b013e31823f3a57.

21. Hess BJ, Weng W, Lynn LA, Holmboe ES, Lipner RS. Setting a fair performance standard for physicians’ quality of patient care. J Gen Intern Med. 2011;26(5):467-473. doi: 10.1007/s11606-010-1572-x.

22. Weng W, Hess BJ, Lynn LA, Holmboe ES, Lipner RS. Measuring physicians’ performance in clinical practice: reliability, classification accuracy, and validity. Eval Health Prof. 2010;33(3):302-320. doi: 10.1177/0163278710376400.

23. Kolen MJ, Brennan RL. Test Equating, Scaling, and Linking: Methods and Practices. 2nd ed. New York, NY: Springer; 2004.

24. NCHS urban-rural classification scheme for counties. CDC website. cdc.gov/nchs/data_access/urban_rural.htm. Updated June 1, 2017. Accessed January 8, 2018.

25. Rittenhouse DR, Casalino LP, Gillies RR, Shortell SM, Lau B. Measuring the medical home infrastructure in large medical groups. Health Aff (Millwood). 2008;27(5):1246-1258. doi: 10.1377/hlthaff.27.5.1246.

26. General Atlas rates: Medicare mortality rates. Dartmouth Atlas Project website. atlasdata.dartmouth.edu/static/general_atlas_rates#mortality. Accessed August 26, 2019.

27. Table S1901: income in the past 12 months (in 2012 inflation-adjusted dollars): 2012 American Community Survey 1-year estimates. US Census Bureau website. factfinder.census.gov/bkmk/table/1.0/en/ACS/12_1YR/S1901/0100000US. Accessed November 1, 2017.

28. Papke LE, Wooldridge JM. Panel data methods for fractional response variables with an application to test pass rates. J Econom. 2008;145(1-2):121-133. doi: 10.1016/j.jeconom.2008.05.009.

29. Papke LE, Wooldridge JM. Econometric methods for fractional response variables with an application to 401(k) plan participation rates. J Appl Econ (Chichester Engl). 1996;11(6):619-632. doi: 10.1002/(SICI)1099-1255(199611)11:6<619::AID-JAE418>3.0.CO;2-1.

30. Ai C, Norton EC. Interaction terms in logit and probit models. Econ Lett. 2003;80(1):123-129. doi: 10.1016/S0165-1765(03)00032-6.

31. Karaca-Mandic P, Norton EC, Dowd B. Interaction terms in nonlinear models. Health Serv Res. 2012;47(1, pt 1):255-274. doi: 10.1111/j.1475-6773.2011.01314.x.

32. Buis ML. Stata tip 87: interpretation of interactions in non-linear models. Stata J. 2010;10(2):305-308. doi: 10.1177/1536867X1001000211.

33. margins—marginal means, predictive margins, and marginal effects. Stata website. stata.com/manuals14/rmargins.pdf. Accessed January 9, 2018.

34. margins, contrast—contrasts of margins. Stata website. stata.com/manuals14/rmarginscontrast.pdf. Accessed January 9, 2018.

35. Haber RJ, Avins AL. Do ratings on the American Board of Internal Medicine Resident Evaluation Form detect differences in clinical competence? J Gen Intern Med. 1994;9(3):140-145. doi: 10.1007/bf02600028.

36. Pham HH, Bernabeo EC, Chesluk BJ, Holmboe ES. The roles of practice systems and individual effort in quality performance. BMJ Qual Saf. 2011;20(8):704-710. doi: 10.1136/bmjqs.2010.048991.

37. Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, et al. Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Ann Fam Med. 2013;11(suppl 1):S115-S123. doi: 10.1370/afm.1549.

38. National diabetes statistics report, 2017: estimates of diabetes and its burden in the United States. CDC website. cdc.gov/diabetes/pdfs/data/statistics/national-diabetes-statistics-report.pdf. Accessed November 1, 2017.

39. High blood pressure facts. CDC website. cdc.gov/bloodpressure/facts.htm. Updated November 30, 2016. Accessed November 1, 2017.

40. Gray B, Reschovsky J, Holmboe E, Lipner R. Do early career indicators of clinical skill predict subsequent career outcomes and practice characteristics for general internists? Health Serv Res. 2013;48(3):1096-1115. doi: 10.1111/1475-6773.12011.
