This study measured breast cancer screening practice patterns in relation to evidence-based guidelines and accountability metrics, and found closer alignment is needed for providing patient-centered care.
ABSTRACT
Objectives: Breast cancer screening guidelines and metrics are inconsistent with each other and may differ from breast cancer screening practice patterns in primary care. This study measured breast cancer screening practice patterns in relation to common evidence-based guidelines and accountability metrics.
Study Design: Cohort study using primary data collected from a regional breast cancer screening research network between 2011 and 2014.
Methods: Using information on women aged 30 to 89 years within 21 primary care practices of 2 large integrated health systems in New England, we measured the proportion of women screened overall and by age using 2 screening definition categories: any mammogram and screening mammogram.
Results: Of the 81,352 women in our cohort, 54,903 (67.5%) had at least 1 mammogram during the time period and 48,314 (59.4%) had a screening mammogram. Women aged 50 to 69 years had the highest proportion screened (82.4% any mammogram, 75% screening indication); 72.6% of women at age 40 had a screening mammogram, with a median of 70% (range = 54.3%-84.8%) among the practices. Of women aged at least 75 years, 63.3% had a screening mammogram, with a median of 63.9% (range = 37.2%-78.3%) among the practices. Of women who had 2 or more mammograms, 79.5% were screened annually.
Conclusions: Primary care practice patterns for breast cancer screening are not well aligned with some evidence-based guidelines and accountability metrics. Metrics and incentives should be designed with more uniformity and should also include shared decision making when the evidence does not clearly support one single conclusion.
Am J Manag Care. 2017;23(1):35-40
Take-Away Points
This study explores the inherent challenges of shifting primary care breast cancer screening practice in response to evidence-based guidelines while also attending to accountability and performance measures.
Breast cancer screening practices are often debated in clinical practice, public health, and national dialogues. A multitude of medical professional organizations endorse specific sets of breast cancer screening guidelines, such as those of the United States Preventive Services Task Force (USPSTF),1 the American Cancer Society (ACS),2 and the American College of Radiology (ACR),3 among others. These guidelines vary to some extent and represent different interpretations of largely the same evidence base. At the same time, in an effort to improve quality of care delivery and hold organizations accountable for the dollars they spend, a number of quality measures are in use by healthcare organizations and practices, such as the Healthcare Effectiveness Data and Information Set (HEDIS) measures from the National Committee for Quality Assurance (NCQA)4 and similar measures used by CMS.5 These measures are often tied to fiscal and other organizational incentives through contractual payment mechanisms.
Currently tested models of care delivery, such as accountable care organizations (ACOs)6 and patient-centered medical homes (PCMHs), employ measures for “best practices” and for qualification/recognition, as well as to establish whether bonuses or savings will be paid out. Specifically, in many of these risk-based contracts, if a provider organization shows worse quality on a number of established quality measures (often including cancer screening), it may not be able to participate in any shared savings generated by new payment models.7 In other pay-for-performance models, reaching set thresholds for improvement in certain quality measures results in higher payment bonuses.8 Further, practices often have contracts with private insurers that specify guidelines of care for breast cancer screening, such as annual screening in women aged 40 to 49 years. Breast cancer screening is one of the measures that healthcare systems and practices are typically required to report. Given that physicians and practices seek to provide guideline-based care and achieve quality/accountability metrics, understanding the alignment of these measures is important for patient care.
Table 1 presents a summary of common evidence-based guidelines and quality measures for breast cancer screening.1-3,9,10 Major evidence-based guidelines that are endorsed or disseminated by professional organizations include those of the USPSTF, which, since 2009, has recommended that biennial screening typically begin at age 50 and continue through age 74 for women of average breast cancer risk; for women aged 40 to 49, a risk- and preference-based decision should be made.1 The most recently released USPSTF guidelines (2016) continue to support the previous 2009 screening guidelines, but include a statement that there is little evidence at this time on the effectiveness of digital tomosynthesis (3D mammography) or of additional screening for women with dense breasts.10 In contrast, the ACS and ACR recommend that average-risk women begin screening at age 40 and continue annually, without a pre-specified ending age.2,3
Measures for quality, payment, or other forms of accountability are usually derived from the same evidence base as professional organization guidelines, but are not always aligned with them. For example, prior to 2014, the HEDIS and ACO breast cancer screening metric was based on women initiating screening at age 40 and continuing every 2 years until age 69, and it counted any mammogram, not necessarily one with a screening indication.6 These quality measures were instituted at many practices and may have provided an impetus or financial incentive to perform as well as possible on the measures by adhering to their screening parameters. At the same time, women receive recommendations on breast cancer screening not only from primary care, but also from radiology and obstetrics and gynecology practices, which typically recommend an annual interval.
From 2009 to 2013, providers wanting to adopt USPSTF breast cancer screening guidelines could not fully do so and still perform well on HEDIS, ACO, and NCQA measures. This is because women aged 40 to 49 years who chose not to be screened, consistent with the USPSTF recommendation, would appear “not current,” which could reduce the rates on which practice performance was measured. On the other hand, providers using ACS or ACR breast cancer screening guidelines would be simultaneously concordant with the accountability metrics of HEDIS and others. In 2014, the HEDIS and ACO breast cancer screening measures were changed to be concordant with the USPSTF recommendation of a starting age of 50 and a 2-year screening interval until age 74.10 This environment of heterogeneous guidelines, differential uptake of guidelines over time, and concurrent goals of providing patient-centered care while meeting practice-based metrics can create unavoidable discordance of breast cancer screening practices within the range of recommendations.
The objective of this study was to examine breast cancer screening practice patterns to assess concordance with evidence-based guidelines and accountability metrics for primary care within a sample of practices from 2 large regional healthcare systems.
METHODS
Study Population and Setting
This study was conducted within one of the consortium member networks of the National Cancer Institute (NCI)-funded Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) consortium,11 which is focused on breast cancer screening. The PROSPR Research Center (PRC) includes data on breast cancer screening within the primary care populations of the Dartmouth-Hitchcock regional network in New Hampshire and the Brigham and Women’s Hospital system in greater Boston and surrounding areas in Massachusetts. Our PRC comprises 37 primary care facilities and 10 radiology facilities in the bi-state region, and includes data from January 2011 through September 2014 on a primary care population (the PRC cohort) of women between the ages of 30 and 89 years who had at least 1 primary care visit in the past 24 months within our respective healthcare systems.
To assess a biennial screening interval, the analysis was restricted to women who became part of our PRC cohort between January 1, 2011, and June 30, 2012, which allowed at least 24 months (plus an additional 3 months to account for the scheduling and completion of a mammography exam) from the time a woman became part of the PRC cohort until September 2014 (n = 83,725). We excluded from the study 16 primary care facilities with fewer than 100 women visiting during the study period. This resulted in a final cohort of 81,352 women among the 21 primary care facilities. The study was approved prior to data collection by the Institutional Review Boards of Dartmouth College and Brigham and Women’s Hospital.
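For illustration, the cohort-eligibility rules described above can be expressed as a short sketch. The function and variable names below are hypothetical and are not drawn from the study's actual code.

```python
from datetime import date

# Illustrative constants reflecting the cohort-entry window and the
# facility-size exclusion described in the Methods (assumed names).
ENTRY_WINDOW_START = date(2011, 1, 1)
ENTRY_WINDOW_END = date(2012, 6, 30)
MIN_WOMEN_PER_FACILITY = 100

def eligible_entry(cohort_entry_date: date) -> bool:
    """Entry between Jan 1, 2011 and Jun 30, 2012 leaves at least 24 months
    of follow-up (plus roughly 3 months for exam scheduling and completion)
    before the September 2014 end of observation."""
    return ENTRY_WINDOW_START <= cohort_entry_date <= ENTRY_WINDOW_END

def included_facilities(women_per_facility: dict) -> set:
    """Drop primary care facilities with fewer than 100 women visiting
    during the study period."""
    return {f for f, n in women_per_facility.items() if n >= MIN_WOMEN_PER_FACILITY}
```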
Data Sources and Collection
Actual screening patterns were measured using data from our PRC database. Data are routinely collected for our PRC cohort on breast imaging (including breast screening exams within 2 years prior to becoming part of the PRC cohort), follow-up, breast pathology, breast cancer diagnosis, and vital status. Specifically, for this study, we used the following data elements: entry into the PRC cohort, the primary care facility, age at PRC cohort entry, date and exam indication for 2D and 3D breast images (mammograms), age at mammography, and vital status. Data sources used included the electronic health record (EHR), radiology information system databases, and institutional cancer registries. All data from our PRC were systematically mapped into a single database with common data elements.
Analysis
Using the PRC database, we defined 2 measures for determining the receipt of breast cancer screening based on a woman’s breast imaging during the study period. The first screening measure category included receipt of any mammogram (screening or diagnostic), which corresponds to HEDIS and ACO metrics. The second category was receipt of a mammogram whose exam indication specified screening, which corresponds to USPSTF guidelines. Additionally, receipt of a screening mammogram within the 2 years prior to a woman’s PRC cohort entry date was included in the screening measure definitions. Age at PRC cohort entry was categorized as follows (in years): under 40, 40 to 44, 45 to 49, 50 to 69, 70 to 74, 75 to 79, 80 to 84, and 85 or older. We reported the overall age distribution of the population and the number and percentage of women in each age group meeting each screening measure.
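As an illustrative sketch only (the record structure and field names below are assumptions, not the study's actual PRC database schema), the two screening measure categories and the age grouping could be computed as follows.

```python
# Age categories used in the analysis (lower bound, upper bound, label).
AGE_GROUPS = [
    (0, 39, "under 40"), (40, 44, "40-44"), (45, 49, "45-49"),
    (50, 69, "50-69"), (70, 74, "70-74"), (75, 79, "75-79"),
    (80, 84, "80-84"), (85, 200, "85 or older"),
]

def age_group(age_at_entry: int) -> str:
    """Map age at PRC cohort entry to the study's age categories."""
    for low, high, label in AGE_GROUPS:
        if low <= age_at_entry <= high:
            return label
    raise ValueError(f"age out of range: {age_at_entry}")

def any_mammogram(exams: list) -> bool:
    """HEDIS/ACO-style measure: any mammogram, screening or diagnostic."""
    return len(exams) > 0

def screening_mammogram(exams: list) -> bool:
    """USPSTF-style measure: at least 1 exam with a screening indication."""
    return any(e.get("indication") == "screening" for e in exams)
```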
For analyzing the proportion of women who initiated and continued screening, including the time interval between screens (Figure and Table 2), we used the USPSTF criteria category (receipt of a mammogram indicated as screening). We assessed the percentage of women initiating screening at age 40 and at age 50, and continuing screening at age 75 or older, for each primary care facility. We defined the age-40 cutoff with a 27-month window to account for time to schedule and complete an exam. For example, if a woman received a screening mammogram up to 27 months following her 40th birthday, she would be considered as initiating screening at age 40. We summarized the frequency and proportion of women within 5-year age categories for each of the screening intervals (annual, biennial, and more than 2 years), in addition to the median and interquartile range (IQR) of the screening interval in days. The screening intervals were defined as 9 to 18 months for annual, 18 to 27 months for biennial, and over 27 months for more than 2 years.
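A minimal sketch of the interval classification and the 27-month initiation window follows, using the stated cutoffs (9 to 18 months annual, 18 to 27 months biennial, over 27 months as more than 2 years); the month-to-day conversion, boundary handling, and function names are illustrative assumptions.

```python
from datetime import date

# Months approximated as 30.44 days for illustration only.
DAYS_PER_MONTH = 30.44

def classify_interval(days_between_exams: float) -> str:
    """Classify the gap between consecutive screening mammograms."""
    months = days_between_exams / DAYS_PER_MONTH
    if 9 <= months < 18:
        return "annual"
    elif 18 <= months < 27:
        return "biennial"
    elif months >= 27:
        return "more than 2 years"
    return "less than 9 months"  # falls outside the defined categories

def initiated_at_age_40(birth_date: date, screening_dates: list) -> bool:
    """A woman is considered to have initiated screening at age 40 if a
    screening mammogram occurred within 27 months after her 40th birthday.
    The day is clamped to 28 to sidestep leap-year edge cases in this sketch."""
    fortieth = date(birth_date.year + 40, birth_date.month, min(birth_date.day, 28))
    window_days = 27 * DAYS_PER_MONTH
    return any(0 <= (d - fortieth).days <= window_days for d in screening_dates)
```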
RESULTS
Our study cohort included 81,352 total women, with 38,897 (47.8%) aged 50 to 74 years (Table 3). Overall, 54,903 of 81,352 (67.5%) had a mammography exam of any type (screening or diagnostic) and 48,314 of 81,352 (59.4%) had a screening mammogram during our study period, with the highest proportions screened in the 50-to-69 age group (82.4% any mammogram; 75% screening mammogram) (Table 3). Seventy percent of women aged 40 to 44 had a screening mammogram and 77.3% had a mammogram of any type (screening or diagnostic). Among the older age groups, the proportion with a mammogram steadily decreased; however, over a quarter (27.5%) of women 85 years or older had received a screening mammogram (Table 3).
Examining screening by age, the proportion of women with a screening mammogram at age 40 was 72.6% overall; across the primary care facilities, the median was 70%, with a range of 54.3% to 84.8% (Figure). For women initiating with a screening mammogram at age 50, the overall proportion was 75.5%, with a median of 77.2% and a range of 51.4% to 91.8% across the primary care facilities. Sixty-three percent of women 75 years or older continued screening, and among the primary care facilities, the median was 63.9%, with a range of 37.2% to 78.3% (Figure).
Of the women undergoing screening mammography with 2 or more exams (n = 32,275), the vast majority had a mammogram on an annual basis (79.5%) (Table 2). As the age category increased, the proportion of screened women in the annual screening group also increased (Table 2). Median screening intervals overall and by age group reflect the high proportion of annual screeners (Table 2). The median screening interval among those who screened annually was 379 days (IQR, 369-419; data not shown).
DISCUSSION
This study explores the inherent challenges of shifting breast cancer screening practice in response to evidence-based guidelines while also attending to accountability and performance measures. We found that screening mammography (ie, exams performed with a screening indication) appears to be largely initiated among women in their 40s and stays fairly consistent until later life. For older women (ie, those 75 years or older), there is a notable drop-off in screening, but well over half the women in this age group were screened, and well over a third of women aged 80 to 84 were screened. Among women who have been screened, the vast majority do so at an annual interval across all ages, with the median screening interval bearing that out.
When receipt of any mammogram (screening or diagnostic) within a 2-year period is counted as having been screened, as in HEDIS and ACO measures, the proportion of women screened appears much higher. This work suggests that practice patterns are not shifting with the USPSTF guidelines, and that the heterogeneity of accountability and performance measurement may be a barrier to change, given the lack of uniformity of payment incentives, performance measures, and evidence-based guidelines.
By examining which breast cancer screening guidelines primary care practices follow within the complex landscape of evidence-based guidelines, and the increasing need for accountability metrics, we found that ACS and ACR guidelines were followed, as was the age of initiation for the 2012 HEDIS measure. Notably, the USPSTF and HEDIS (2014 edition) starting age and interval were not well followed in practice, but would be met by default because the measure of biennial mammography would be satisfied by women screening annually; further, screening at age 50 would be met by women who had started in their 40s, and there is no penalty for overscreening. The USPSTF and HEDIS stopping ages also were not reflected in the screening practice patterns, because a large proportion of women continued on with screening after age 75, which is in alignment with ACR and ACS guidelines.
Few studies have examined concordance of practice-level breast cancer screening guidelines and patterns in relation to prevailing evidence-based recommendations and accountability metrics. Several studies, however, have examined patient12-14 and provider15,16 adoption of breast cancer screening guidelines, particularly those related to the 2009 USPSTF recommendations, which changed the grade B 2002 recommendation of screening mammography every 1 to 2 years starting at age 40. A 2009 western Washington study of 18 providers reported 3 main reasons for low intention to change breast cancer screening practices in response to the USPSTF recommendations: lack of confidence in the evidence, limited availability of low-cost mammography, and desire to offer more services.13 Although the representativeness of this study for US primary care providers is unknown, it highlights the possibility that the factors driving providers’ clinical decision making may or may not be congruent with prevailing national recommendations and delivery models.
In addition to recommendations from medical organizations, new care delivery models, such as the patient-centered medical home, seek to employ standardized measures to track cost, use, and clinical quality in order to promote benefits to patient care17 and to support fiscal incentives.18 For the PCMH model, breast cancer screening follows the HEDIS and NCQA 2012 measure set, which is fundamentally derived from the same evidence base as other breast cancer screening recommendations. Notably, care delivery models and incentives are typically based in primary care and not directly in radiology practices, which play a key role in breast cancer screening and often use ACR and/or ACS guidelines. Differing guidelines for breast cancer screening allow for variation in adoption based on clinical specialty, practice and provider preferences, institutional financial incentives, and regional norms. This lack of uniformity across guidelines and measures almost ensures a heterogeneous approach to breast cancer screening in clinical practice. Heterogeneity in guidelines may create conundrums for practices and their patients and providers, especially if providers within practices and/or patients differ in which guidelines they choose to follow.
This issue is not limited to breast cancer screening, but is seen in other areas of clinical practice, with hypertension being a prime example. There is currently controversy over the appropriate cut point for the target systolic blood pressure (SBP) in individuals over 60 years of age.19 Specifically in question is whether the target SBP should be 150 or 140 mm Hg for this age group,20 with some guidelines set at 150 mm Hg; however, the HEDIS and CMS measures still have the goal of 140 mm Hg.19 Given the confusion and the lagging change in high-stakes measures tied to payment and shared savings, it may be no surprise that providers are reluctant to change practice, as has been demonstrated.20
Organizations and practices increasingly have the ability to track actual breast cancer screening practice patterns through both claims- and EHR-based measures, and are, in fact, incentivized to do so for accountability reporting. Assessing, at the practice and provider levels, how well breast cancer screening patterns match organizational and/or provider-specific goals based on the guidelines and measures of choice is an important component of streamlining how breast cancer screening is delivered. However, incorporating women’s screening preferences in quality-of-care metrics is essential for ensuring high-quality, patient-centered care.
Several population-based studies, based on national survey data, have examined breast cancer screening in relation to the USPSTF recommendations. In a Medicare sample from 2005 to 2010, an abrupt decline in screening mammography use (-4.3%) was noted in 2010 relative to 2009.21 In contrast, among 5.5 million privately insured women aged 40 to 64 years from 2006 to 2011, a small decrease was seen in screening among women aged 40 to 49 years 2 months after the USPSTF guidelines were released; by 2 years following the recommendations, however, a small increase was seen in screening mammography rates across all ages.22 No changes in screening patterns were observed after the 2009 USPSTF guidelines in a 2006 to 2010 Medical Expenditure Panel Survey of women 40 years or older.23 Our findings are in accord with those reporting no effect of the USPSTF guidelines on practice patterns, and extend the generalizability of that finding to women within primary care and to additional years of observation beyond 2010.
Recent studies of the 2009 USPSTF recommendation changes also show that women lack awareness of both the guidelines and the risks and benefits of screening, leading to resistance in adopting new guidelines.1,3-14 Women’s preferences and expectations may be shaped by media, providers, and social networks, and may not match their providers’ and practices’ evolving adoption of new breast cancer screening guidelines. If practices hope to deliver the best patient-centered care, then cultivating the closest alignment of guidelines, accountability measures, and preferences in breast cancer screening should be a key objective.
Strengths and Limitations
A strength of this study was its focus on practice-level patterns, about which little has been reported in the literature. However, a limitation is that we did not capture patient preferences and thus cannot account for their potential role in the patterns observed. A multilevel study is needed to simultaneously examine patient-, provider-, and practice-level breast cancer screening patterns and preferences, ideally in a variety of healthcare settings and locales, which may capture important warranted variation in processes of care.
CONCLUSIONS
There is a complex landscape of breast cancer screening, with its host of recommendations, guidelines, and measures. Adding to that complexity is the multilevel nature of healthcare systems, with patients, providers, practices, and networks all having their own objectives, preferences, and accountabilities. Increasing the complexity further is the diversity of clinical settings and professional lenses through which breast cancer screening may be viewed. For example, general internal medicine and family practice may differ from radiology in the application of evidence, choice of guidelines, or need for specific accountability measures. The situation is similar with the hypertension recommendations, in which general internal medicine, family practice, and cardiology may not all endorse or use the same SBP measure. With heterogeneity in clinical guidelines across specialties despite their grounding in the same evidence base, and with differing priorities within the multilevel healthcare environment (patient-centered care vs fiscally motivated metrics), it is not surprising that there is room for improvement in the provision of patient-centered care related to breast cancer screening. Additional efforts are needed to reconcile conflicting guidelines and to harmonize accountability and performance measures with evidence-based guidelines.
Author Affiliations: Department of Biomedical Data Science (TO, JW, MG, SP), Norris Cotton Cancer Center (TO, ANAT), and The Dartmouth Institute for Health Policy and Clinical Practice (TO, CB, ANAT), Geisel School of Medicine at Dartmouth, Lebanon, NH; Division of General Internal Medicine, Brigham and Women’s Hospital (JSH, AB, KH), Boston, MA; Department of Medicine, Dartmouth-Hitchcock Medical Center (CB), Lebanon, NH.
Source of Funding: The design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript were supported as part of the National Cancer Institute-funded consortium, Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) (U54 CA163307).
Author Disclosures: Drs Tosteson and Haas received a grant from the National Cancer Institute. Dr Bitton serves as a part-time advisor to the Center for Medicare and Medicaid Innovation (CMMI) on their comprehensive primary care initiative (no involvement on setting quality metrics for CMS or CMMI). The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (AB, MG, JSH, KH, TO, SP, ANAT); acquisition of data (MG, JSH, KH, TO, SP); analysis and interpretation of data (AB, CB, MG, JSH, TO, SP, ANAT, JW); drafting of the manuscript (AB, CB, MG, JSH, TO, JW); critical revision of the manuscript for important intellectual content (AB, CB, MG, JSH, TO, ANAT); statistical analysis (JW); provision of patients or study materials (MG, JSH); obtaining funding (JSH, TO, ANAT); administrative, technical, or logistic support (MG, JSH, KH, TO); and supervision (TO).
Address Correspondence to: Martha Goodrich, MS, Geisel School of Medicine at Dartmouth, 1 Medical Center Dr, Lebanon, NH 03756. E-mail: Martha.e.goodrich@dartmouth.edu.
REFERENCES
1. US Preventive Services Task Force. Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2009;151(10):716-726. doi: 10.7326/0003-4819-151-10-200911170-00008.
2. American Cancer Society recommendations for early breast cancer detection in women without breast symptoms. American Cancer Society website. http://www.cancer.org/cancer/breastcancer/moreinformation/breastcancerearlydetection/breast-cancer-early-detection-acs-recs. Updated October 20, 2015. Accessed September 10, 2015.
3. ACR practice parameter for the performance of screening and diagnostic mammography. American College of Radiology website. https://www.acr.org/~/media/3484ca30845348359bad4684779d492d.pdf. Accessed September 3, 2015.
4. Breast cancer screening: percentage of women 50 to 74 years of age who had a mammogram to screen for breast cancer. Agency for Healthcare Research and Quality website. https://www.qualitymeasures.ahrq.gov/summaries/summary/48809. Published November 2014. Accessed September 10, 2015.
5. Measure search [breast cancer screening]. National Quality Forum website. http://tinyurl.com/hru6hpd. Accessed December 12, 2016.
6. Accountable care organizations (ACO). CMS website. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ACO/index.html?redirect=/aco. Published January 6, 2015. Accessed September 10, 2015.
7. Edwards ST, Bitton A, Hong J, Landon BE. Patient-centered medical home initiatives expanded in 2009-13: providers, patients, and payment incentives increased. Health Aff (Millwood). 2014;33(10):1823-1831. doi: 10.1377/hlthaff.2014.0351.
8. Paustian ML, Alexander JA, El Reda DK, Wise CG, Green LA, Fetters MD. Partial and incremental PCMH practice transformation: implications for quality and costs. Health Serv Res. 2014;49(1):52-74. doi: 10.1111/1475-6773.12085.
9. HEDIS 2014, volume 2: 2014 summary table of measures, product lines and changes. National Committee for Quality Assurance website. http://www.ncqa.org/Portals/0/HEDISQM/HEDIS2014/List_of_HEDIS_2014_Measures.pdf. Accessed October 10, 2015.
10. Final recommendation statement—breast cancer: screening. US Preventive Services Task Force. https://www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/breast-cancer-screening1. Accessed December 12, 2016.
11. Beaber EF, Kim JJ, Schapira MM, et al; Population-based Research Optimizing Screening through Personalized Regimens Consortium. Unifying screening processes within the PROSPR consortium: a conceptual model for breast, cervical, and colorectal cancer screening. J Natl Cancer Inst. 2015;107(6):djv120. doi: 10.1093/jnci/djv120.
12. Allen JD, Bluethmann SM, Sheets M, et al. Women’s responses to changes in U.S. Preventive Task Force’s mammography screening guidelines: results of focus groups with ethnically diverse women. BMC Public Health. 2013;13:1169. doi: 10.1186/1471-2458-13-1169.
13. Coronado GD, Gutierrez JM, Jhingan E, Angulo A, Jimenez R. Patient and clinical perspectives on changes to mammography screening guidelines. Breast J. 2014;20(1):105-106. doi: 10.1111/tbj.12219.
14. Kiviniemi MT, Hay JL. Awareness of the 2009 US Preventive Services Task Force recommended changes in mammography screening guidelines, accuracy of awareness, sources of knowledge about recommendations, and attitudes about updated screening guidelines in women ages 40-49 and 50+. BMC Public Health. 2012;12:899. doi: 10.1186/1471-2458-12-899.
15. Corbelli J, Borrero S, Bonnema R, et al. Physician adherence to U.S. Preventive Services Task Force mammography guidelines. Womens Health Issues. 2014;24(3):e313-e319. doi: 10.1016/j.whi.2014.03.003.
16. Haas JS, Sprague BL, Klabunde CN, et al. Provider attitudes and screening practices following changes in breast and cervical cancer screening guidelines. J Gen Intern Med. 2016;31(1):52-59. doi: 10.1007/s11606-015-3449-5.
17. Rosenthal MB, Abrams MK, Bitton A; Patient-Centered Medical Home Evaluators’ Collaborative. Recommended core measures for evaluating the patient-centered medical home: cost, utilization, and clinical quality. The Commonwealth Fund website. http://www.commonwealthfund.org/~/media/Files/Publications/Data%20Brief/2012/1601_Rosenthal_recommended_core_measures_PCMH_v2.pdf. Published May 2012. Accessed December 2016.
18. Integrated Healthcare Association. IHA pay for performance measure set strategy: 2012-2015. http://128.121.107.205/pdfs_documents/p4p_california/2012_2015P4PMeasureSetStrategy.pdf. Accessed September 10, 2016.
19. Peterson ED, Gaziano JM, Greenland P. Recommendations for treating hypertension: what are the right goals and purposes? JAMA. 2014;311(5):474-476. doi: 10.1001/jama.2013.284430.
20. Snipelisky D, Waldo O, Burton MC. Clinical diagnosis and management of hypertension compared with the Joint National Committee 8 panelists’ recommendations. Clin Cardiol. 2015;38(6):333-343. doi: 10.1002/clc.22393.
21. Sharpe RE Jr, Levin DC, Parker L, Rao VM. The effect of the controversial US Preventive Services Task Force recommendations on the use of screening mammography. J Am Coll Radiol. 2013;10(1):21-24. doi: 10.1016/j.jacr.2012.07.008.
22. Wang AT, Fan J, Van Houten HK, et al. Impact of the 2009 US Preventive Services Task Force guidelines on screening mammography rates on women in their 40s. PLoS One. 2014;9(3):e91399. doi: 10.1371/journal.pone.0091399.
23. Howard DH, Adams EK. Mammography rates after the 2009 US Preventive Services Task Force breast cancer screening recommendation. Prev Med. 2012;55(5):485-487. doi: 10.1016/j.ypmed.2012.09.01