The American Journal of Managed Care
Objective: To describe the systems strategies used to reduce failures in delivery of breast and cervical cancer screening services in HMOs with high performance rates for these services.
Study Design: Multiple case study.
Participants and Methods: Seven HMOs participated in an assessment of their breast and cervical cancer screening policies and procedures. Current clinical practice guidelines were analyzed, and key informants were interviewed about organizational policies and procedures that ensure initial screening and follow-up of abnormal results. Data were analyzed across plans for several theoretically relevant domains, including leadership and policies, clinical decision support, delivery system design, clinical information systems, and patient self-management support.
Results: Practice guidelines were fundamentally similar across plans for both cancer screenings, although operationalization of risk and formatting of the written documents differed. These plans adopted a wide array of strategies, particularly in the clinical decision support, clinical information systems, and patient self-management support domains, but there is room for improvement. Differences among plans and between strategies for breast and cervical cancer screening provide new understanding of how to approach this problem.
Conclusions: Organizations seeking to improve performance of breast and cervical cancer screening should consider multiple strategies aimed at multiple targets and should ensure that strategies used for one type of cancer are considered for others.
(Am J Manag Care 2003;9:745-755)
The past 2 decades have witnessed widespread national and local efforts to improve the performance of important clinical preventive services.1 A common approach to improvement has been the development and dissemination of clinical guidelines, such as those for breast and cervical cancer screening.2-6 Although evidence-based guidelines are clearly an important starting point, they are insufficient by themselves.7,8 Variation and inconsistency in performance remain challenges for clinicians, medical groups, and health plans.9
An array of strategies has been shown to improve specific services in clinical practice,10,11 including policy-level strategies to increase access,12,13 practice-level strategies such as patient and physician reminders,14,15 and motivational strategies for individual patients and clinicians.16,17 Recent analyses10,11,18-20 emphasize the need to implement multiple strategies.
Some HMOs have demonstrated leadership in preventive services such as breast and cervical cancer screening, conducting research trials to determine best practices21-24 and building model programs.25-27 Given that most HMOs already cover screening services, it can be valuable to understand the relevant policies and practices used by plans with high screening rates. This article describes a qualitative investigation of 7 health plans with above-average success in implementing breast and cervical cancer screening among their members, and it addresses the following questions:
1. What types of strategies are used by these high-performing plans, and how much variation is there among them?
2. Are implementation strategies for breast cancer screening different from those for cervical cancer screening?
3. What gaps exist between the organizational strategies of these plans and the literature?
Information derived from such a descriptive case study approach can be of value to other HMOs, medical groups, purchasers, clinicians, and policymakers as examples of approaches associated with benchmark performance.
METHODS
Study Overview and Participants
The Cancer Research Network (CRN) is a collaboration of HMOs funded by the National Cancer Institute to increase the effectiveness of preventive, curative, and supportive interventions for major cancers. Detecting Early Tumors Enables Cancer Therapy (DETECT), 1 of 3 original CRN research projects, is investigating why any individual with prepaid access to preventive services would be diagnosed as having late-stage breast cancer or invasive cervical cancer.
The DETECT investigators systematically assessed policies and procedures for breast and cervical cancer screening in a convenience sample of 7 geographically diverse, nonprofit plans. Table 1 summarizes selected characteristics of these plans. The 1999 Health Plan Employer Data and Information Set (HEDIS)28 rates of these 7 plans29 for breast and cervical cancer screening range from 76.1% to 81.5% (national average, 74%) for mammography screening and from 77.0% to 84.4% (national average, 73%) for Papanicolaou testing.
Guiding Framework
The Institute of Medicine report Ensuring Quality Cancer Care30 addressed the need for quality cancer screening as part of the cancer care continuum. We conceptualized this continuum as a progression of types of care with equally important transitions between them (Figure 1). Each type of care and each transition is subject to failure, some of it the responsibility of providers, some of patients, and some of the delivery system. To improve screening rates and outcomes, it is important to identify potential problems in both the types of care and the transitions.
Domains of inquiry for this research were guided by organizational theory and models8,32-34 and by empirical evidence on interventions to improve provider performance.14,15,35 The Chronic Care Model36,37 articulates strategies at several levels (plan, group, and practice)38 that target plan leadership, clinicians, and patients to achieve the goal of productive interactions and encounters. Figure 2 depicts the Chronic Care Model strategies investigated in the present study.
Data Collection
Data were obtained from a content analysis of each plan's breast and cervical cancer screening guidelines (fall 1999) and a structured survey of plan key informants. The survey instrument was developed by a working group of plan investigators (all authors) and was pilot tested at 2 nonparticipating CRN health plans. The survey was approved by the institutional review boards of all participating plans. Use of an interviewer's manual, with question-by-question instructions, ensured consistency in data collection, which occurred from December 1999 through February 2000. Investigators at the 7 plans conducted interviews with the individuals most knowledgeable about each topic for the group- or staff-model components. Four plans reported <10 key informants, 1 plan reported 10 to 20, and 2 plans reported 21 to 25.
Analyses
Two independent investigators (K.V.G. and J.G.Z.) analyzed and synthesized the content of the screening guidelines. Plan investigators reviewed the analyses of their own guidelines for accuracy, blinded to the other plans' analyses.
The lead author (K.V.G.) analyzed the key informant data from all 7 plans using qualitative analytical strategy39 and techniques.40 Similarities and differences across plans were summarized for each survey domain. Using a process of constant comparison,41 the data were then analyzed for themes in terms of organizational strategy, target (eg, clinician or patient), and type of screening (breast or cervical cancer). Leadership was assessed primarily at the plan level.
RESULTS
Findings are presented according to the domains identified in Figure 2, highlighting variations among plans and between the 2 types of screening.
Leadership/Policies/Processes
Research Commitment.
In the past 5 years, 6 of the 7 plans carried out public domain research related to breast cancer screening that was funded by external sources. Only 2 plans did so for cervical cancer. Breast cancer studies addressed improving mammography technology, motivational intervention trials, guideline adherence (observational studies), and follow-up of abnormal results. Cervical cancer studies investigated optimal screening intervals, efficacy of screening in older women, human papilloma virus testing, and adherence to guidelines.
Performance Standards.
All of the plans participate in HEDIS,42 and all are accredited by the National Committee for Quality Assurance. Internally developed or driven standards exist, but they vary in point of origination and frequency of monitoring. Other strategies for assessing performance are reported under quality control.
Financial and Other Incentives.
All of the plans either reward or recognize clinicians or practices for high performance rates of breast and cervical cancer screening. Only 1 plan reported financial incentives for individual clinicians, whereas 4 reported them for practices or clinics as a whole. No plan recognizes individual clinicians for their personal performance, but 3 plans recognize practices that achieve good rates.
Delivery System Design
Service Arrangements.
Overall, screening services are ordered or provided through primary care providers in a decentralized process. One exception is a plan that has a comprehensive centralized breast cancer screening program with dedicated facilities and staff. Plans vary with respect to departments that order the screens: 6 plans reported that at least 50% of Papanicolaou smears are performed by 1 department (with 5 naming obstetrics/gynecology), and 4 plans reported that at least 50% of mammograms are ordered by 1 department (2 each naming internal medicine and obstetrics/gynecology).
Processing and interpretation are more centralized, particularly for Papanicolaou smears. Six of the 7 plans reported that 1 laboratory performs all of the cytologic services. All of the plans reported that 1 group has responsibility for quality control. In contrast, plans reported 4 to 35 plan-owned mammogram facilities (6 plans indicated that 1 group is responsible for quality control at all facilities), and 6 plans use contract facilities for some mammograms. Four of these 6 plans could not readily provide information about these contracted facilities, such as the number of radiologists who read mammograms.
Quality Control and Improvement.
In the past 5 years, all of the plans emphasized quality improvement activities that consider screening performance and technical quality. All of the plans provide periodic feedback to clinicians on rates of breast and cervical cancer screening, although they are more likely to provide the individual clinician (panel) rates for cervical cancer screening (5 plans for cervical cancer and 4 plans for breast cancer) and the facility (all patients) rate for breast cancer screening (3 plans for breast cancer and 2 plans for cervical cancer). All of the plans reported quality standards or processes to increase the accuracy of detection for cervical cancer (eg, limits on the number of Papanicolaou smears read per day for cytotechnologists and verification reads on reports with normal findings) and breast cancer (eg, double reads of all mammograms and reports to radiologists on cancers in performed mammography assessments). All of the plans reported efforts to improve the quality of interpretation for cervical cancer (eg, regular training and reevaluation of a minimum number of normal slides) and breast cancer (eg, feedback to radiology technicians on image quality).
Clinical Decision Support
Guidelines.
All of the plans have established clinical practice guidelines for cervical cancer screening and breast cancer screening (Table 2). Operationalization of risk and formatting of the written documents differ, but the guidelines are fundamentally similar across plans for each cancer with respect to technologies, screening frequency, and upper and lower age limits for each cancer. Risk level seems to be the paramount consideration for breast cancer screening recommendations, whereas age and onset of sexual activity are the priorities for cervical cancer screening recommendations.
For breast cancer, all of the guidelines address high and average risk levels and consider personal history, previous biopsy with a positive finding, and family history in determining risk level. Operational definitions vary across plans (eg, "first-degree relative" vs "relative"). Differences in exact age and risk definitions are reflected in differing recommendations for mammography periodicity (annual or every 2 years). Only 1 plan makes mammography recommendations based solely on age. Plans also vary in whether they recommend mammography for the 40- to 49-year-old age group not at high risk (3 plans) and in whether there are upper age limits for screening (3 plans).
For cervical cancer, all 7 plans define high risk using varying operational definitions (eg, multiple sexual partners, human papilloma virus infection, smoking, and early onset of sexual intercourse), but risk level does not affect recommended periodicity in 2 plans. All of the plans emphasize age more prominently than risk factors. Five plans address whether to screen women older than 65 years, and 6 address screening after hysterectomy.
Plans develop breast and cervical cancer screening guidelines similarly, with multidisciplinary representation, clinician input, and heavy reliance on evidence. All of the plans expend considerable effort constructing and revising guidelines. Within a plan, review schedules tend to be the same (at least every 2 years) for both guidelines. All of the plans disseminate the guidelines to individual clinicians, with wide use of internal electronic communication (intranet) systems. Less effort is expended on training, with most plans providing it but few requiring it.
Clinical Information Systems
Most plans have systems that notify the clinician when a woman is due for screening, that notify the clinician of an abnormal test result, and that track whether recommended follow-up of an abnormal result is completed (Table 3). Plans vary in the mode of notification, such as cueing via the visit registration slip (for patients with appointments) vs distributing lists of overdue patients (for all patients, not conditional on a scheduled appointment). Each plan uses the same mode of notification for mammograms and Papanicolaou smears. Visit-based systems are computerized and are also used for patient notification; at 1 plan, however, this system is in place only in the obstetrics/gynecology department. For breast and cervical cancer screening, active notification is more common than flagging the medical record. Clinicians are generally informed on a monthly basis about all their patients who have not completed recommended follow-up of an abnormal Papanicolaou smear finding. In contrast, clinicians are informed on a patient-specific, ongoing basis about patients without recommended follow-up of an abnormal mammography finding. Across plans, neither type of screening was supported by more strategies for all 3 types of systems.
Patient Self-management Support
General Health Education.
Plans reported a multifaceted, traditional approach to education on breast and cervical cancer screening, with methods such as newsletters, pamphlets, member handbooks, and posters named by all 7. Four plans use electronic media such as videotapes or audiotapes, and 3 plans use the plan Web site. More educational strategies were reported for breast cancer screening than for cervical cancer screening. Dissemination of guidelines to members is identical for breast and cervical cancer screening: 6 plans provide consumer versions to members at enrollment, 5 provide them annually, and 5 provide them on request.
Risk Assessment Surveys.
Among the 5 DETECT plans that conduct risk assessments, the most common target population is Medicare members. Information is collected at enrollment. The proportion of eligible members reached varies from <25% to >85% among the plans.
Reminders and Tracking.
Table 4 displays data on strategies for notification of members about breast and cervical cancer screening. Almost all of the plans notify women directly about the need for screening, notify women of an abnormal screening result, and ensure contact with individuals who do not complete recommended follow-up of an abnormal result within a specified time. More strategies regarding the need for screening and abnormal result notification are used for cervical cancer screening, whereas more strategies regarding failure to follow up are used for breast cancer screening. Plans with computerized systems that print on the visit slip are unlikely to use other methods of informing women that they are due for screening. They are also unlikely to use the system to notify members of an abnormal result or to pursue women who fail to follow up an abnormal result. Telephone calls about abnormal Papanicolaou smear results are generally made by the clinic where the member receives the care, whereas notification of an abnormal mammogram result is from radiology or a centralized screening program.
COMMENT
Case studies can provide valuable descriptive information about strategies used by successful organizations to meet their goals.39 Findings from this assessment confirm that the participating plans or their respective medical groups have adopted a wide range of organizational strategies to reduce the potential for failure in and between the processes of breast and cervical cancer screening, follow-up, and diagnosis. Every plan includes 1 or more strategies in each domain posited to have an effect on the healthcare team, on patients, or on their interactions (Figure 2). The variation in implementation across plans and between breast and cervical cancer screening services provides a useful illustration of the diversity of possible approaches and the opportunities for further quality improvement. Although systems strategies may be more difficult to implement in network models or fee-for-service arrangements,43 some of the strategies and combinations of strategies should be generalizable to any large medical group.
The study plans emphasize clinical decision support, clinical information systems, and patient self-management support strategies. Plans have developed and disseminated breast and cervical cancer screening guidelines for many years. Like Brown and colleagues,44 we found that the guidelines differed dramatically in length, format, and organizing principles, but substantive differences were minimal. The evidence on guideline dissemination, training, and continuing education consistently demonstrates, however, that guidelines in and of themselves do not dramatically alter practice; resources should therefore be directed toward other systematic changes to ensure implementation.8,14,15,45,46
Clinical information strategies are varied and complex. Although nearly all of the plans provide either encounter reminders or lists of members needing screening (but not both), they have focused less attention on systems for tracking whether screening and follow-up have occurred. Patient self-management strategies amplify clinical information support,7,21,22,25,47-49 and most plans have implemented 1 or more strategies for reminders and tracking. Although most plans target strategies to clinicians and members, these systems are generally separate (eg, the system that reminds the clinician that a woman is due for screening does not share information with the system that reminds the patient). Plans with appointment-based notification may fail to notify women who do not access care. Plans generally notify women and their clinicians of the need for screening and of abnormal results, but there is room for improvement in the compilation of statistics for quality improvement monitoring.
The important issue of continued periodic screening (vs current prevalence of screening) was not considered in this study. Strategies to target women who have not been screened regularly or who have not been screened at all are noted in the text and in Table 3. Given that the common approaches (in-reach at the time of notification, medical record flags, and periodic lists to clinicians) depend on contact with the system, an important safety net strategy is still missing for women who do not access the system at all.
With respect to notifying a woman of abnormal results, more creative options may need to be explored, including e-mail delivery for those with access to computers and increased use of certified mail and telephone counseling.
Areas less emphasized in this study are leadership strategies, including performance standards, and delivery system design. Performance standards and monitoring efforts emphasized initial screening (Figure 1). Follow-up of abnormal results may represent an opportunity for more quality improvement. For example, none of the 7 plans reported monitoring the time between interpretation of an abnormal Papanicolaou smear result and notification of patients and clinicians or the existence of a corresponding performance standard. Likewise, none of the plans reported using quality monitoring strategies or recognition incentives for appropriate (timely) follow-up of abnormal test results. Another component of DETECT is examining the issue of follow-up from the patient's perspective, and the results should shed light on potential quality improvement objectives.
All 7 plans have made both screens the topic of quality improvement. Although the literature is ambivalent about the impact of continuous quality improvement on prevention services50,51 and the feasibility of validly measuring its impact,52 some method of change management is required to improve care. These activities are complementary to the other strategies that these plans have adopted.52 For example, although systems are in place for notification of abnormal results or completion of recommended follow-up, no plan has set benchmarks for these important processes.14,53-55
HMOs have undertaken a variety of service arrangements, including differing levels of centralization and staffing patterns, to increase access and accountability.11,24 In this study, 1 plan has centralized the screening process, and only for breast cancer. Centralized tracking of whether follow-up of abnormal results has occurred is more common, notably for breast cancer screening. The approach to staffing, however, may be as important as, or interact with, centralization or decentralization of services. When members have direct access to obstetrics/gynecology offices for primary care, most Papanicolaou smears are performed there.
Quality of mammogram and Papanicolaou smear interpretation has been an issue of concern nationally.56-59 All 7 plans are certified by the American College of Radiology and the College of American Pathologists, whose regulations specify minimum quality control activities. Plans generally did not report standards or monitoring for contracted entities, but most mammograms and Papanicolaou smears are interpreted centrally.
Differences in strategies between breast and cervical cancer screening were evident, but there was no clear emphasis on one type of screening over another. Plans reported more research emphasis on breast than cervical cancer screening. There seems to be equal lack of emphasis on internal performance standards for both types of screening. Clinical information strategies were essentially equivalent for breast and cervical cancer screening, although plans reported more systems in place for tracking patients who are non-adherent for cervical cancer screening or follow-up of abnormal results than for breast cancer screening. Patient self-management support evidenced little difference among plans, with somewhat fewer plans reporting tracking and follow-up mechanisms for non-adherent patients for cervical cancer screening than for breast cancer screening. The apparent disconnect between patient and clinician systems provides a promising opportunity for improvement.
Several lessons for other health plans, purchasers, and large medical groups can be drawn from this analysis. We found that successful organizations are applying multiple strategies in all of the domains identified through research.20 Variations in strategies across plans and between cancers illustrate the range of possibilities rather than truly divergent approaches. Specific strategies, as well as total numbers of strategies, may reflect differences in the organization of care delivery, variations in clinician types and physician specialties, and the influence of clinical champions.60 Additional performance standards, for example, on screening utilization by continuous enrollees or follow-up of abnormal results, may be required. Consideration of more centralized screening programs may be justified. Just as important, however, are advances that capitalize on existing information resources to further improve performance. In particular, systems that track follow-up of abnormal results on the clinician and patient sides could be harnessed for quality improvement. An important theme to emerge from this analysis was the room for improvement, even in these successful organizations.
This study has several limitations. The study HMOs are not a representative sample of managed care organizations, representing vanguard rather than average plans in terms of preventive services. Data collection was carried out only on the group- or staff-model component, but some of these HMOs have many members receiving services from contracted medical groups. Our focus on systems at the plan and medical group levels does not systematically capture practice-level efforts, which are important in determining clinician performance. The descriptive nature of this study does not allow us to assess the relative importance of a particular strategy. Finally, important aspects such as leadership style, competency and continuity, and other practice organization approaches43,61 were not measured. In addition, 5 of the participating plans are Kaiser Permanente regions formed by exclusive Permanente Medical Group contracts with the Kaiser Foundation Health Plan. Activities undertaken in these organizations may be more interdependent than those of the 2 independent plans. The medical groups are almost completely autonomous in terms of operations, however, and breast and cervical cancer screening policies and procedures fall within their purview. Thus, there is ample room for variations in screening policies and improvement strategies.
In summary, this study reports on the variety of strategies that organizations and practices could adopt to reduce failures in breast and cervical cancer screening and detection:
- Leadership commitment can be reflected in involvement in research, performance standards expectations, and financial and other incentives.
- Service arrangements can vary but should emphasize quality control and improvement.
- Clinical decision support strategies (guidelines and dissemination) are important in defining risk and periodicity.
- Clinical information systems (tracking processes of care) and member self-management (reminders and notification) that reinforce clinician and patient actions are important, but mode varies and vigilance about awareness and implementation is critical.
- Variable strategies should be considered for different types of screening tests and for care processes across the continuum (screening, follow- up of abnormal results, and diagnosis).
Acknowledgments
The following Cancer Research Network organizations and principal staff members participated in this study:
Group Health Cooperative: Stephen Taplin, MD, MPH (principal investigator), and Deb Casso, MPH.
Henry Ford Health Systems: Christine Cole Johnson, PhD, MPH (principal investigator); Bruce McCarthy, MD, MPH (coinvestigator); Marianne Ulcickas Yood, DSc, MPH; and Karen Wells, BS.
Kaiser Permanente Colorado: Ned Calonge, MD, MPH (principal investigator); Kim Bischoff, MSPH; Eric France, MD; and Judy Mouchawar, MD, MPH.
Kaiser Permanente Hawaii: Thomas M. Vogt, MD, MPH (principal investigator); Joyce Gilbert, MPH; and Denise Williams, BS.
Kaiser Permanente Northern California: Lisa Herrinton, PhD (principal investigator); Michele Manos, PhD; and Carol P. Somkin, PhD.
Kaiser Permanente Northwest: Victor J. Stevens, PhD (principal investigator), and Sheila Weinmann, PhD.
Kaiser Permanente Southern California: Ann M. Geiger, PhD (principal investigator).
Meyers Primary Care/Fallon: Terry Field, ScD (principal investigator); Jane Zapka, ScD; and Karin Valentine Goins, MPH.
Healthy People 2010: Understanding and Improving Health.
1. US Department of Health and Human Services. Washington, DC: US Dept of Health and Human Services; 2000. 2. US Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd ed. Baltimore, Md: Williams & Wilkins; 1996.
3. US Preventive Services Task Force. Screening for breast cancer: recommendations and rationale. Available at: http://www.ahrg.gov/clinic/3rduspstf/breastcancer/brcanrr.htm. Accessed February 25, 2002.
4. Smith RA, Cokkinides V, von Eschenbach AC, et al. American Cancer Society guidelines for the early detection of cancer. CA Cancer J Clin. 2002;52:8-22.
The Obstetrician-Gynecologist and Primary Preventive Health Care.
5. American College of Obstetrics and Gynecology. Washington, DC: American College of Obstetricians and Gynecologists; 1993.
6. Feig SA, D'Orsi CJ, Hendrick RE, et al. American College of Radiology guidelines for breast cancer screening. AJR Am J Roentgenol. 1998;171:29-33.
Med Care.
7. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. 2001;39:II2-II45.
8. Sonnad S. Organizational tactics for the successful assimilation of medical practice guidelines. Health Care Manage Rev. 1998;23: 30-37.
Eff Clin Pract.
9. Solberg LI, Kottke TE, Brekke ML. Variation in clinical preventive services. 2001;4:121-126.
10. Ockene J, Zapka J, Pbert L, Brodney S, Lemon S. Provider, system and policy strategies to enhance the delivery of cancer prevention and control activities in primary care. (Background paper) The Institute of Medicine Report: Fulfilling the Promise of Cancer Pre-vention and Early Detection. Washington, DC: National Academy Press; 2003.
Health Promotion and Disease Prevention in Clinical Practice.
11. Thompson R, Woolf S, Taplin S, et al. How to organize a practice for the development and delivery of preventive services. In: Woolf S, Jonas S, Lawrence R, eds. Baltimore, Md: Williams & Wilkins; 1996.
12. Blustein J. Medicare coverage, supplemental insurance, and the use of mammography by older women. N Engl J Med. 1995;332: 1138-1143.
Obstet Gynecol.
13. Lawson HW, Lee NC, Thames SF, Henson R, Miller DS. Cervical cancer screening among low-income women: results of a national screening program, 1991-1995. 1998;92: 745-752.
14. Yabroff KR, Kerner JF, Mandelblatt JS. Effectiveness of interventions to improve follow-up after abnormal cervical cancer screening. Prev Med. 2000;31:429-439.
Cancer Epidemiol Biomarkers Prev.
15. Mandelblatt JS, Yabroff KR. Effectiveness of interventions designed to increase mammography use: a meta-analysis of provider-targeted strategies. 1999;8:759-767.
16. Grimshaw JM, Russell IT. Achieving health gain through clinical guidelines II: ensuring guidelines change medical practice. Qual Health Care. 1994;3:45-52.
J Fam Pract.
17. Mandelblatt J, Kanetsky P. Effectiveness of interventions to enhance physician screening for breast cancer. 1995;40:162-171.
18. Solberg L, Kottke T, Conn S, Brekke M, Calomeni C, Conboy K. Delivering clinical preventive services is a systems problem. Ann Behav Med. 1997;19:271-278.
Am J Prev Med.
19. Rimer BK, Conaway MR, Lyna PR, et al. Cancer screening practices among women in a community health center population. 1996;12:351-357.
20. Solberg L, Brekke M, Fazio C, et al. Lessons from experienced guideline implementers: attend to many factors and use multiple strategies. Jt Comm J Qual Improv. 2000;26:171-188.
21. Taplin SH, Anderman C, Grothaus L, Curry S, Montano D. Using physician correspondence and postcard reminders to promote mammography use. Am J Public Health. 1994;84:571-574.
22. Somkin C, Hiatt R, Hurley L, Gruskin E, Ackerson L, Larson P. The effect of patient and provider reminders on mammography and Papanicolaou smear screening in a large health maintenance organization. Arch Intern Med. 1997;157:1658-1664.
23. Lipkus IM, Rimer BK, Halabi S, Strigo TS. Can tailored interventions increase mammography use among HMO women? Am J Prev Med. 2000;18:1-10.
24. McCarthy BD, Yood MU, Bolton MB, Boohaker EA, MacWilliam CH, Young MJ. Redesigning primary care processes to improve the offering of mammography: the use of clinic protocols by nonphysicians. J Gen Intern Med. 1997;12:357-363.
25. Binstock MA, Geiger AM, Hackett JR, Yao JF. Pap smear outreach: a randomized controlled trial in an HMO. Am J Prev Med. 1997;13:425-426.
26. Thompson RS, Taplin SH, McAfee TA, Mandelson MT, Smith AE. Primary and secondary prevention services in clinical practice: twenty years' experience in development, implementation, and evaluation. JAMA. 1995;273:1130-1135.
27. Glasgow RE, Whitlock EP, Valanis BG, Vogt TM. Barriers to mammography and Pap smear screening among women who recently had neither, one or both types of screening. Ann Behav Med. 2000;22:223-228.
28. Corrigan JM, Nielsen DM. Toward the development of uniform reporting standards for managed care organizations: the Health Plan Employer Data and Information Set (Version 2.0). Jt Comm J Qual Improv. 1993;19:566-575.
29. National Committee for Quality Assurance. The State of Managed Care Quality 2000. Washington, DC: National Committee for Quality Assurance; 2000.
30. Hewitt M, Simone JV, eds. Ensuring Quality Cancer Care. Washington, DC: National Academy Press; 1999.
31. Zapka JG, Taplin SH, Solberg LI, Manos MM. A framework for improving the quality of cancer care: the case of breast and cervical cancer screening. Cancer Epidemiol Biomarkers Prev. 2003;12(1):4-13.
32. Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ. 1997;157:408-416.
33. Kaluzny AD, Konrad TR, McLaughlin CP. Organizational strategies for implementing clinical guidelines. Jt Comm J Qual Improv. 1995;21:347-351.
34. Landon BE, Wilson IB, Cleary PD. A conceptual model of the effects of health care organizations on the quality of medical care. JAMA. 1998;279:1377-1382.
35. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings: the Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998;317:465-468.
36. Wagner E. Chronic disease management: what will it take to improve care for chronic illness? Effect Clin Pract. 1998;1:2-4.
37. Wagner E. Managed care and chronic illness: health services research needs. Health Serv Res. 1997;32:702-714.
38. Shortell SM, Zazzali J, Burns L, et al. Implementing evidence-based medicine: the role of market pressures, compensation incentives, and culture in physician organizations. Med Care. 2001;39:I62-I76.
39. Yin RK. Case Study Research: Design and Methods. Beverly Hills, Calif: Sage Publications; 1984.
40. Miles M, Huberman A. Qualitative Data Analysis. Thousand Oaks, Calif: Sage Publications; 1994.
41. Lincoln Y, Guba E. Naturalistic Inquiry. Newbury Park, Calif: Sage Publications; 1985.
42. National Committee for Quality Assurance. HEDIS 3.0 Narrative: What's in It and Why It Matters. Washington, DC: National Committee for Quality Assurance; 1997.
43. Shortell SM, Alexander J, Budetti P, et al. Physician-system alignment: introductory overview. Med Care. 2001;39:I1-I8.
44. Brown JB, Shye D, McFarland B. The paradox of guideline implementation: how AHCPR's depression guideline was adapted at Kaiser Permanente Northwest Region. Jt Comm J Qual Improv. 1995;21:5-21.
45. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA. 1992;268:1111-1117.
46. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867-874.
47. Rimer B, Conaway M, Lyna P, et al. The impact of tailored interventions on a community health center population. Patient Educ Couns. 1999;37:125-140.
48. Burack RC, Gimotty PA, George J, Simon MS, Dews P, Moncrease A. The effect of patient and physician reminders on use of screening mammography in a health maintenance organization: results of a randomized controlled trial. Cancer. 1996;78:1708-1721.
49. Taplin SH, Barlow W, Urban N, et al. Stage, age, comorbidity, and direct costs of colon, prostate, and breast cancer care. J Natl Cancer Inst. 1995;87:417-426.
50. Solberg LI, Brekke ML, Kottke TE, Steel RP. Continuous quality improvement in primary care: what's happening? Med Care. 1998;36:625-635.
51. Solberg L, Kottke T, Brekke M, Magnan S. Improving prevention is difficult. Effect Clin Pract. 2000;3:153-155.
52. Cretin S, Farley D, Dolter K, Nicholas W. Evaluating an integrated approach to clinical quality improvement: clinical guidelines, quality measurement, and supportive system design. Med Care. 2001;39:II70-II84.
53. Marcus A, Crane L. A review of cervical cancer screening intervention research: implications for public health programs and future research. Prev Med. 1998;27:13-31.
54. Health Canada. Quality Determinants of Organized Breast Cancer Screening Programs. National Committee of the Canadian Breast Cancer Screening Initiative; 1997. Available at: http://www.hc-sc.gc.ca/pphb-dgspsp/publicat/obcsp-podcs98/obcspr_e.html. Accessed July 2, 2003.
55. US Department of Health and Human Services. Clinical Practice Guideline Number 13: Quality Determinants of Mammography. Rockville, Md: Agency for Health Care Policy and Research; 1994. Report No. 95-0632.
56. Mody DR, Davey DD, Branca M, et al. Quality assurance and risk reduction guidelines. Acta Cytol. 2000;44:496-507.
57. Gupta DK, Komaromy-Hiller G, Raab SS, Nath ME. Interobserver and intraobserver variability in the cytologic diagnosis of normal and abnormal metaplastic squamous cells in Pap smears. Acta Cytol. 2001;45:697-703.
58. Elmore JG, Barton MB, Moceri VM, Polk S, Arena PJ, Fletcher SW. Ten-year risk of false positive screening mammograms and clinical breast examinations. N Engl J Med. 1998;338:1089-1096.