Objectives: This analysis assessed the evolution of public reporting of provider performance in Aligning Forces for Quality (AF4Q) alliances, contrasted alliances that stopped reporting with those that plan to continue, and drew insights from alliance public reporting efforts for the national transparency movement.
Methods: Qualitative research methods, combined with document review, were used to analyze interview data collected over a nearly 10-year period from the 16 participating alliances.
Results: AF4Q alliances made their greatest contributions to provider transparency in reporting ambulatory quality and patient experience measures. However, reporting ambulatory cost/efficiency/utilization measures was more challenging for alliances. Alliances contributed the least with respect to measures of inpatient performance. Six alliances ceased reporting at the end of the AF4Q program because of their inability to develop stable funding sources and overcome stakeholder skepticism about the value of public reporting. Insights provided by alliance leaders included the need to: focus on provider, rather than consumer, responses to public reports as the most likely avenue for improving quality; address the challenge of funding the reporting infrastructure from the beginning; explore collaborations with other entities to increase public reporting efficiency; and develop a strategy for responding to efforts at the national level to increase the availability of information on provider performance.
Conclusion: The AF4Q initiative demonstrated that a wide variety of voluntary stakeholder coalitions could develop public reports with financial and technical support. However, the contents of these reports varied considerably, reflecting differences in local environments and alliance strategies. The challenges alliances faced in maintaining their reporting efforts were substantial, and not all chose to continue reporting. Nevertheless, there are potential roles for alliances going forward in contributing to the national transparency movement.
Am J Manag Care. 2016;22:S382-S392
In this article, we examine the efforts of voluntary stakeholder coalitions (ie, alliances) to measure and publicly report provider performance as part of the Robert Wood Johnson Foundation’s (RWJF’s) Aligning Forces for Quality (AF4Q) initiative. (Details regarding these alliances and the communities they served are outlined in the article by Scanlon et al located in this supplement.1) Public reporting of provider performance was a key component of the overall strategy of the AF4Q initiative to improve the quality of care in alliance communities (see logic model in eAppendix A). Transparent quality measures could encourage consumers to choose higher-quality providers, enhance patient interactions with providers, and stimulate providers to undertake quality improvement activities to avoid losing patients and the stigma that might be attached to poor performance on quality measures. Health plans could use these measures to develop benefit designs that distinguish higher-quality from lower-quality providers (possibly rewarding consumers for choosing higher-quality providers) and implement pay-for-performance programs. This view of the possible benefits of public reporting was similar to that expressed by national advocates of greater provider performance transparency.2
Consistent with this emphasis, alliances were selected to participate in the AF4Q program in part because they had experience with, or expressed a willingness to implement, public reporting.3 However, our analysis (further detailed below) found substantial variation in the performance of alliances with respect to AF4Q public reporting. Some met the reporting goals and timelines established by the AF4Q program and planned to continue reporting after the program’s conclusion, others were less successful, and some stopped reporting entirely once the AF4Q program funding ended. Overall, in a separate analysis, we found that consumers in alliance communities had access to more provider performance information than was available to residents of comparison communities,4 and this remained true throughout the AF4Q initiative.5
To place the AF4Q program public reporting efforts in context, we begin by discussing national performance measurement and public reporting efforts prior to, during, and at the end of the AF4Q initiative. We summarize expectations for measurement and public reporting from the AF4Q program and how they changed over the course of the initiative. After describing our methods for data collection and analysis, we address 3 questions: (1) Across AF4Q communities, how did public reporting evolve and what factors drove that evolution? (2) At the end of the AF4Q program, what distinguished alliances that continued to report from those that stopped reporting? (3) What insights can be drawn from the alliance public reporting experience to inform national efforts to increase provider performance transparency?
Background
National Reporting Landscape
Reporting health plan quality measures began in the early 1990s6 under the auspices of the National Committee for Quality Assurance (NCQA). The federal Health Care Financing Administration (now CMS) and several states (eg, New York, Pennsylvania, and California) began reporting hospital quality measures during this time period, as well. By 2000, large employers were encouraging the reporting of physician performance as part of an overall healthcare reform strategy.7 Many health plans responded by reporting a limited number of provider quality measures to their members, or by “tiering” providers based on quality measures, but this information usually was not available to the general public. Private firms also entered the reporting arena (eg, Healthgrades and WebMD), with consumers typically paying for full access to their provider performance measures.
The National Quality Forum (NQF) was established in 1999 as a public-private partnership to “create a foundation for consistent data reporting and collection.”8 Incorporating measures developed by NCQA or endorsed by NQF subsequently became the “gold standard” for public reporting. In 2001, under the auspices of the Leapfrog Group, large employers began to publish information on hospital patient safety practices, using data voluntarily submitted by some hospitals. The Ambulatory Care Quality Alliance (the American Academy of Family Physicians, American College of Physicians, America’s Health Insurance Plans, and the Agency for Healthcare Research and Quality), established in 2004, produced a “starter set” of physician quality measures, which it pilot-tested in 2006. In the public sector, Hospital Compare was created by CMS in 2002, and in 2005 the first process of care measures were displayed on the Hospital Compare website, with patient experience measures added soon after.9
Clearly, by the time the first alliances were selected for the AF4Q program in 2006, there was considerable momentum for reporting provider performance, especially quality measures. Subsequently, several additional public-sector reporting efforts were initiated, beginning with the establishment of the Chartered Value Exchange (CVE) program by the Bush administration in 2008. Under this program, community organizations could apply for CVE designation, which entailed a commitment to publish provider quality information. In return, CVEs were to receive access to Medicare performance measurement results, along with technical assistance through a peer-learning network. Of the 24 organizations that received the CVE designation by 2012, 11 were AF4Q alliances.
Support for public reporting expanded under the Obama administration’s Affordable Care Act (ACA), which required CMS to share Medicare data with “qualified” local entities for use in reporting provider performance.10 In addition, advocacy groups encouraged the reporting of physician quality measures on ACA health insurance exchanges. Meanwhile, Medicare’s Physician Compare effort provided data on individual physician characteristics: in 2015, patient assessments of care and clinical quality measures became available for practices and groups. This effort also gave consumers access to information on whether individual providers or medical groups participated in various quality programs, including the Physician Quality Reporting System, the Million Hearts initiative, and the Electronic Health Records Incentive Program.11 In addition to its focus on physicians, CMS initiated public reporting for skilled nursing facilities and home healthcare.
Alongside these federal efforts, 43 states have developed, or plan to develop, all-payer claims databases, providing a resource that could be used in producing provider performance reports.12 Subsequent to the establishment of the AF4Q program, private-sector purchasers increased their support for the reporting of measures of cost and efficiency, in addition to quality measures,13 with the hope that consumers would consider “value” (defined as quality relative to dollars spent) when making their choices and that providers would be encouraged to compete based on the value of the services they provided.14
AF4Q Program Initiatives
Two of the first 4 alliances selected to participate in AF4Q were already publicly reporting provider performance, and a third was in the process of measure construction. However, there were still relatively few community coalition public reporting efforts nationally at the onset of the AF4Q program, although some coalitions were actively involved in Leapfrog reporting efforts.15 There were several aspects of locally produced reports that appeared promising (ie, adding value to national reporting efforts) and deserving of support in the AF4Q initiative. Locally produced reports could increase the number of different sources of performance information available in the community, raising the likelihood that consumers would become aware of provider performance measures. It was hoped that the reports would draw significant attention, because they were “locally developed,” and that providers would view them as credible, especially when they were involved in measure selection or development. Finally, to the degree that local reporting efforts were guided by input from community stakeholders, their measures might be more salient to community residents than those in national reports.3
When they joined the AF4Q program, alliances were charged with publicly reporting (within 3 years) measures of ambulatory quality in the treatment of chronic conditions for at least 50% of primary care physicians in their communities. Alliance leaders felt that their continued funding could very well depend on complying. Almost all alliances reached this goal, although report content varied considerably.3 Subsequently, expectations for alliances were expanded to include reporting patient experience measures for ambulatory care, hospital quality and patient experience measures, and measures of resource use, charge, price, cost, and/or efficiency in inpatient and outpatient settings. Alliances were encouraged to make their reports “consumer-friendly” and to pursue different methods of disseminating reports. By the end of the AF4Q program in 2015, the alliances were expected to produce public reports that facilitated consumer selection of “high-value” providers.
In summary, the reporting goals for AF4Q alliances were very ambitious. The hope was that alliances would play a leading role in the emerging transparency movement by providing “models” for public reporting that could be adopted in other communities. During the nearly 10-year AF4Q initiative, the number of NQF and NCQA performance measures of all types expanded considerably, which aided alliances in selecting and constructing measures for their reports. In addition to funding, the AF4Q program provided alliances with technical assistance to support their reporting efforts.
Data Sources
We used 3 data sources in our analyses. We tracked the contents of alliance reports throughout the AF4Q program and constructed a longitudinal dataset for each alliance that contained information on the types of measures reported, start and stop dates for reporting of measures, level of reporting (eg, medical group or medical practice), data sources used to construct each measure (eg, claims, medical records, surveys), and frequency with which measures were reported (eg, annually, biannually). Using this dataset, we examined how reporting evolved across alliances and over time.
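To make the structure of this tracking dataset concrete, the sketch below shows one way its records could be represented. This is a minimal illustration in Python; the field names, codes, and helper function are hypothetical choices for exposition, not the actual schema of the evaluation dataset.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeasureRecord:
    """One row of the illustrative longitudinal dataset: a single
    publicly reported measure for a single alliance (hypothetical schema)."""
    alliance: str                    # alliance identifier
    measure_type: str                # "quality", "patient experience", or "cost/efficiency/utilization"
    setting: str                     # "ambulatory" or "inpatient"
    reporting_level: str             # eg, "medical group", "practice", "individual physician"
    data_source: str                 # eg, "claims", "medical records", "survey"
    frequency: str                   # eg, "annual", "biannual"
    start_year: int                  # first year the measure was publicly reported
    stop_year: Optional[int] = None  # None if still reported at the end of the program

def measures_reported(records: list[MeasureRecord], year: int) -> dict[str, int]:
    """Count the measures each alliance publicly reported in a given year."""
    counts: dict[str, int] = {}
    for r in records:
        if r.start_year <= year and (r.stop_year is None or r.stop_year >= year):
            counts[r.alliance] = counts.get(r.alliance, 0) + 1
    return counts
```

Queries of this form (eg, tallying which alliances reported a given measure type in 2012 vs 2015) are the kind of operation underlying the cross-alliance comparisons reported below.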
Beginning in 2006, and continuing through 2016, the evaluation team conducted 1100 semi-structured in-person and telephone interviews with alliance leaders, staff, and community stakeholders, with most addressing issues related to public reporting. This included 4 rounds of site visits to AF4Q communities, 10 rounds of telephone interviews with alliance leaders between 2007 and 2014, and 2 rounds of telephone interviews between 2012 and 2014 with alliance leaders specifically charged with managing public reporting efforts. The semi-structured interviews provided information about alliance commitment to reporting, alliance reporting strategies, and the challenges alliances encountered. Later rounds of interviews also addressed alliance decisions about whether or not to sustain reporting after the AF4Q program and provided insights regarding how multi-stakeholder coalitions might contribute most effectively to national provider transparency efforts. Telephone interviews also were conducted with key national thought leaders in healthcare to elicit their views on the impact of alliance reporting efforts. (The process for collecting and analyzing these interview data is described in detail in the summative research design article, which is available online at www.ajmc.com.16)
Our third source of data consisted of AF4Q program documents, including (1) information on guidelines and performance targets established by RWJF and the AF4Q’s national program office, (2) funding proposals submitted by alliances throughout the course of the AF4Q program, and (3) progress summaries that alliances submitted to the national program office every 4 months beginning in 2011.
Evolution of Public Reporting Across Alliances: Achievements and Challenges
Of the 16 alliances, 5 reported provider performance (ambulatory or inpatient) prior to entering the AF4Q program, and 2 more were engaged in measure selection and development, with the intent of reporting; the remaining alliances began the public reporting process after joining the AF4Q initiative. Given their different starting points, it is not surprising that strategies for reporting, and the contents of reports, varied across alliances. Reports also evolved over time in response to new AF4Q program expectations, fluctuating stakeholder support, and experience gained with reporting and its consequences.
As in previous work, we view alliance report production as beginning with measure selection and continuing through measure construction and dissemination, with alliances revisiting their decisions during the AF4Q program (Figure). We extend findings from prior work, which included alliance reporting activities from 2006 to 2012,3 by addressing the latter part of the AF4Q initiative (2012-2015; see tables in eAppendix B). In the sections that follow, we focus on measure selection, construction, and dissemination, examining 3 types of measures (quality, patient experience, and cost/efficiency/overutilization) for ambulatory and inpatient care. We discuss the evolution in reporting for each type and the important factors influencing that evolution.
Selection and Construction of Ambulatory Measures
Quality. Consistent with AF4Q program expectations, by 2012, all alliances reported measures of quality of care for patients with chronic conditions.3 One respondent observed, “That’s what you had to do to get the grant.” But at this stage, the reports varied considerably in their scope and sophistication. For example, one alliance that reported measures prior to the AF4Q program included 24 process and intermediate outcome measures relating to vascular disease, diabetes, hypertension, and kidney disease in its report, while an alliance that joined the AF4Q program later, and had no prior reporting experience, reported 4 diabetes process measures. Overall, most of the alliances increased the number of publicly reported quality measures between 2012 and the end of the program, while 3 alliances reduced the number of measures and 2 stopped reporting altogether.
By 2012, all 16 alliances were using nationally endorsed measures in their reports, and in that year, 6 of the alliances began reporting composite performance measures of varying types, in addition to disaggregated measures. These approaches remained unchanged through 2015. Alliances used claims data and medical records data to construct measures, with no clear trend toward one source or the other during the nearly decade-long initiative. In 2015, 8 alliances relied on administrative data, 5 used medical records, and 1 combined these data sources; the same alliances used the same data sources in 2012. In contrast, some alliances altered the level at which they constructed and reported measures. In 2012, 5 alliances reported quality measures only at the medical group or large physician organization level, while 10 reported at the practice level and 1 at the individual physician level. As of 2015, of the 14 reporting alliances, 13 reported at the practice level and 1 reported only at the physician organization level. (The alliance that reported at the individual physician level stopped reporting altogether.)
Alliance efforts to include information from more patients and providers in measure construction encountered several obstacles. Because supplying data was costly, maintaining the participation of data suppliers (health plans or providers) often was challenging for alliances. When claims data were used for measure construction, alliances needed to integrate data provided voluntarily by health plans, Medicaid programs, and/or self-insured employers. Therefore, depending on the decisions of data suppliers, alliance reports at different points in time could reflect care provided to different subsets of community residents. One respondent noted that, by the end of the AF4Q program, health plans in the community had simply grown “weary” of participating in alliance public reporting efforts, while another observed that a number of large self-insured employers in its community had declined to supply data for public reporting. Alliances that depended on providers to submit data from medical records for measure construction faced a similar challenge and, based on observations of interview respondents, the percentage of participating providers appeared to vary considerably across alliance communities. (Alliances typically did not track this number.) Consequently, the ability of consumers to compare performance across providers varied among alliance communities.
Patient Experience. Some alliance leaders felt that reporting ambulatory patient experience measures was a valuable extension of their reporting efforts. These measures were applicable to all consumers, in contrast to the much more limited applicability of quality measures for a specific chronic illness. In 2009, 2 alliances participated in a Consumers’ CHECKBOOK pilot effort to test the feasibility of reporting patient experience at the individual physician level. Although neither alliance continued reporting beyond the pilot study, in 2010, the AF4Q program provided supplemental pilot funding to 3 other alliances to incorporate Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) questions into the existing survey instruments of a limited number of medical groups in their communities and to publicly report results. Two of the 3 alliances continued reporting after the pilot program. Stakeholder support for reporting patient experience grew throughout the AF4Q program, partly due to a national focus on improving the experience of patients in the healthcare system.17 By 2015, 12 of the 16 alliances had reported ambulatory patient experience measures at least once, and 5 reported on a regular basis. A leader of one of these 5 alliances said that ambulatory patient experience reporting may have been their primary contribution to provider transparency in the community.
The development and increasing use of CG-CAHPS measures at the national level made measure selection less of an obstacle to reporting. Despite this, fielding patient surveys to collect the data needed to construct these measures added to alliance reporting costs, and finding the necessary funds proved to be a significant obstacle for many alliances. Alliances responded by using different combinations of membership dues, grants, and state support. However, one respondent expressed hope for the future, observing that hospital acquisitions of physician practices made funding easier to obtain; hospitals possessed more resources to support the effort than did individual physician practices and had a history of gathering patient experience data.
Cost/Efficiency/Overutilization. Alliances were asked to report NQF-endorsed measures of resource use, charges, price, cost, or efficiency by July 2011, and to report for 50% of community providers by 2013. By 2012, 7 alliances had accomplished this objective by reporting measures of overutilization, with 1 also reporting a cost measure. The alliances typically reported 4 or fewer measures, with “avoidance of antibiotic treatment for bronchitis” and “appropriate imaging for low back pain” being the most common. By 2015, 9 alliances were reporting measures in this category; the number and types of measures changed very little between 2012 and 2015, with 2 exceptions. One alliance added 3 cost measures (total cost of care for all patients, total cost of care for adults, and total cost of care for pediatrics) and a second added total cost of care for adults. Of these 9 alliances, 6 reported at the practice level, 2 at the group level, and 1 at both levels.
In general, alliances found that reporting ambulatory cost/efficiency performance was a daunting task, relative to reporting ambulatory quality measures. There were fewer nationally endorsed measures to draw from, and stakeholder support (especially support from health plans that sometimes viewed cost data as proprietary) was more difficult to achieve. One alliance respondent observed that “utilization and price [are] way closer to the pocketbooks of organizations and individuals than quality.” Constructing cost/efficiency measures was technically challenging, as well. Alliances that used claims data to report ambulatory quality measures employed the same source to construct cost/efficiency/overutilization measures where possible. (However, some faced contractual restrictions on use of claims data from health plans for this purpose.) None of the alliances that used only medical records data to report quality measures issued any reports containing ambulatory cost/efficiency/overutilization measures. Some tried to access state-based all-payer claims databases for this purpose, but faced limitations on who could use these data and for what purposes.
Beyond issues related to data access and the complexity of measure construction, arguably the largest obstacle to reporting in this area was a lack of agreement among alliance leadership and stakeholders concerning the usefulness of the effort. One alliance leader summarized this point of view: “I think the biggest challenge we have is that nobody seems to care. The consumer doesn’t use it, and nobody wants to pay for it.” In contrast, 1 alliance played a leadership role nationally in reporting an ambulatory “total cost of care” measure, while another felt that reporting cost/efficiency energized the alliance and placed it in a pivotal position to engage in healthcare reform in its community.
Selection and Construction of Inpatient Measures
Quality. In 2008, the Foundation instructed the alliances to add inpatient quality measures to their public reports (and, subsequently, patient experience and cost/efficiency measures). Alliances could meet this new inpatient reporting requirement by using measures from Medicare’s Hospital Compare program; simply providing a link to the Hospital Compare website would not fulfill the hospital quality-reporting objective. Ten alliances reported Hospital Compare quality measures only, and an additional 4 alliances supplemented those measures with measures constructed from other data sources. Another alliance selected measures from other national reports and its state hospital association, and 1 chose not to report inpatient quality at all during this time period. Four alliances used the same technical assistance provider to help them populate their reports with Medicare inpatient measures. They reported an identical set of Hospital Compare measures that remained essentially unchanged throughout the AF4Q program. By the end of the initiative, only 9 alliances were still reporting inpatient quality measures.
Given alliance access to measures in Hospital Compare and other reports for inpatient quality reporting, measure selection was relatively straightforward and measure construction was not required (Figure). The primary challenge was convincing stakeholders that the effort was necessary. Because hospital quality reporting was already being done by Hospital Compare and other entities, stakeholders were skeptical that disseminating the same measures through alliance websites constituted a significant contribution to quality improvement in their communities.
Patient Experience. Fourteen of the 16 alliances reported inpatient experience, and all used Hospital Compare to meet this expectation. However, although these alliance reports added another point of access to inpatient experience measures for consumers, the measures did not expand the information already available. As with ambulatory patient experience measures, there was strong support among alliance stakeholders for reporting hospital patient experience and, as with measures of inpatient quality, importing the measures from Hospital Compare was relatively straightforward.
Cost/Efficiency/Overutilization. Alliances were given considerable latitude regarding which measures would satisfy reporting requirements in this area. As they did with hospital quality, virtually all of the reporting alliances extracted measures from Hospital Compare. By 2015, 7 of the 16 alliances included these measures in their reports, with Medicare procedure costs and readmission rates being the most common. Measures drawn from Hospital Compare were based only on a subset of community residents—Medicare beneficiaries. Also, Medicare procedure costs are of limited usefulness for consumer decision making.
Dissemination of Public Reports
Once the alliances selected and constructed provider performance measures, they faced the challenge of how to effectively disseminate the measures to potential users, including clinicians, insurers, and consumers (Figure). All alliances posted their performance measures on websites that they maintained. Alliances sought out media coverage of the release of their reports and found it was an effective way to garner attention. This attention, however, was harder to generate on an ongoing basis as report releases became more routine.
Over time, RWJF placed more emphasis on dissemination of public reporting to consumers. In 2009, the Foundation informed alliances that to increase consumer awareness and use of comparative provider information, they should focus on making their websites more consumer friendly and engage in new outreach strategies. To monitor their success, alliances were required to report on the number of visits to their websites and page views related to their public reports.18 However, a separate analysis suggested that there were not substantial improvements in the consumer friendliness of alliance websites over the last 5 years of the AF4Q program. Alliances that were relatively new to reporting suggested that this was because they needed to devote their resources to report production.18 Also, some alliances remained unconvinced that consumers had interest in this type of information, citing provider resistance to the use of more intuitive, consumer-friendly measures of provider performance (eg, star ratings).
Disseminating reports to consumers was a challenge because few alliances had existing consumer constituencies for support. Instead, alliances sought to reach consumers through employer stakeholders, traditional media, and social media. The level of commitment to doing so was mixed, as not all alliances felt that the information in their reports was likely to be valued by consumers. One alliance leader said that healthcare professionals constituted the primary group interested in the reported measures; they were “the real consumer group that we’ve actually targeted.” In contrast, several alliances tried innovative approaches to make their reports more accessible to consumers. These included having a reporting website targeted only at consumers and including educational health messages to complement specific provider measures on the website.
The effort that received the greatest attention was an experimental collaboration between the Consumers Union (the policy and action division of Consumer Reports) and 3 alliances to include their performance measures in local issues of Consumer Reports magazine. This generated a very large number of new visits to the alliance reporting websites, suggesting that there was a potential market for their performance data. Building on the Consumers Union pilot study, RWJF funded the DOCTOR Project, which provided guidance and funding for an additional 8 AF4Q alliances and 2 other stakeholder collaboratives to distribute their provider performance data through Consumer Reports.
Summary of the Evolution of Alliance Public Reporting Efforts
AF4Q alliances increased consumer access to measures of ambulatory quality and patient experience; through their reporting and dissemination activities, they expanded both the type and amount of information available to consumers (Figure). In a separate analysis, we compared public reporting activities in alliance communities with a set of comparison sites. We found that consumers in alliance communities had access to more information overall, with the difference remaining relatively stable over time. The exception was patient experience, where the advantage enjoyed by consumers in alliance communities grew. For more information, see Christianson et al.5
To date, alliances have made fewer contributions to ambulatory cost/efficiency/overutilization transparency. A small number of alliance leaders remain enthusiastic, but the challenges seem likely to discourage a majority of alliances from expanding their efforts in this area. Most alliance leaders and stakeholders viewed inpatient performance reporting as an area already “occupied” by other state or national organizations. Consequently, one respondent observed that the alliance “checked the boxes” on reporting inpatient measures so as to not jeopardize continued AF4Q funding; this is consistent with limited stakeholder support for more ambitious efforts. Report dissemination efforts varied widely, but, in general, alliances struggled to develop and implement effective means to reach consumers. Explanations included lack of stakeholder support, skepticism concerning whether consumers valued the information, and competing demands on alliance and stakeholder resources.
Sustaining Public Reporting by Alliances
It would be reasonable to expect that alliances would sustain their public reporting efforts after the AF4Q program, given the high priority that RWJF assigned to public reporting and the funds and technical assistance the Foundation provided to support alliance reporting activities. At the conclusion of the AF4Q initiative, based on our interview data, 6 alliances seemed highly likely to continue their public reporting efforts, 4 appeared less certain, and 6 were not planning to construct any new reports. In this section, we discuss what distinguishes alliances that plan to continue their public reporting efforts from those that have stopped or plan to stop reporting.
Alliances That Will Continue
Based on respondent interviews, 2 general themes emerged for most, but not all, of these 6 alliances. First, alliance leadership viewed public reporting as central to the alliance’s identity, or “brand.” Five of the 6 alliances were reporting provider performance, or were in the measure selection process, before they were chosen to participate in the AF4Q program. Their stakeholders were committed to public reporting, independent of AF4Q program expectations and support. One respondent observed that “it’s what we do.” Another person said, “We did it before, and we’ll do it after.” A second theme, consistent with the first, is that most of these alliances had relatively stable funding sources, unrelated to the AF4Q initiative, which could be used for reporting. One respondent characterized the AF4Q program as a “blip” in the alliance’s history and certainly not the only driver of the alliance’s priorities or funder of its activities.
Still, these 6 alliances varied in how they viewed their public reporting futures. In one alliance, leadership felt that the public reporting field in the community was “crowded.” Consequently, the alliance planned to reduce the number of measures reported and the frequency of reporting, focusing mostly on ambulatory patient experience. In another alliance, leaders were working to expand alliance funding from community employers, which they believed to be important for maintaining the alliance reporting infrastructure going forward. In a third alliance, inpatient measures were dropped from its report, as stakeholders felt that they were “least used” among its measures; this alliance was planning to focus future efforts on cost/efficiency/overutilization measurement, where its stakeholders saw greater value. A fourth alliance was playing a significant role in its community’s healthcare reform efforts; its leaders planned to reduce alliance emphasis on public reporting but continue it in the near term. These examples illustrate the significant variation among the alliances that anticipated continuing to report in how they envisioned their reporting futures.
Alliances Where Continued Reporting Appears Less Certain
Four alliances have less certain reporting futures, for various reasons. In one case, the alliance was moving reporting responsibilities to another community entity, raising questions about whether public reporting will be sustained after the transition. In the other 3 cases, alliance leaders expressed a desire to continue reporting, but there was not a clear plan to fund it; all were exploring the feasibility of raising funds from various sources, including health plans, local foundations, providers, and state grants. Two of these alliances lagged behind in updating existing reports and had experienced some erosion of stakeholder participation or funding. The leader of the third suggested that the alliance would continue to measure and report privately as part of its quality improvement activities, but expressed skepticism that the public reporting of measures was beneficial to consumers.
Alliances That Will No Longer Report
Of the 6 alliances that discontinued public reporting, none were engaged in public reporting of provider performance prior to the AF4Q program. In most cases, their leaders and stakeholders initially were skeptical that reporting would lead consumers to choose higher-quality providers. This was true for alliances located in rural areas, where physician practices were full and there was concern about maintaining an adequate physician supply; however, similar sentiments were expressed by respondents associated with alliances in urban settings. Stakeholders saw more promise in devoting community time and resources to other endeavors (such as supporting provider quality improvement activities or identifying and publicizing pressing local healthcare issues), rather than continuing their public reporting efforts.
As with some of the “uncertain” alliances, those that will no longer report failed to establish funding sources to replace the AF4Q grant, given a general lack of stakeholder enthusiasm for reporting. One respondent observed that the alliance was too small to sustain public reporting and succinctly summarized the issues: reporting was “labor- and financially-intensive,” the “value proposition” was uncertain, and “there was no real demand for it.”
Among the alliances terminating reporting efforts, 1 had a lengthy community history that did not involve reporting. Moreover, the alliance ceased operations altogether in the midst of leadership changes and funding challenges. In 2 other instances, new voluntary organizations formed to replace alliances in their communities; these organizations were early in their development, but did not plan to publicly report provider performance. Leaders in the remaining alliances planned to narrow their organizational focus, believing their organizations could play a valuable community role as “neutral conveners” without incurring the financial, technical, and sometimes political risks associated with public reporting.
Insights From the Public Reporting Experiences of AF4Q Alliances
AF4Q alliances experienced various degrees of success in expanding and sustaining their reporting activities during the last years of the initiative. This is not surprising, given the diverse characteristics of the alliances and the environments in which they operated. Even the termination of reporting by some alliances might have been anticipated. As one alliance director observed, “It was never a reasonable vision that we’re going to have 50 regional reporting entities.” With respect to public reporting, one could characterize the AF4Q initiative as investing in a number of “risky” enterprises, with the expectation that successes and failures might contribute in useful ways to the national transparency movement. From this perspective, the insights that alliance directors gained from their experiences are instructive.
Focus on Provider Responses as the Most Likely Means Through Which Public Reporting Could Improve Quality
From the perspective of alliance directors, the AF4Q program emphasized consumer responses to public reports as the primary driver of improved quality. By the end of the AF4Q initiative, however, most alliance directors did not believe that the “competitive market strategy” (use of report measures by consumers when choosing providers, which, in turn, stimulates quality or cost competition among providers for patients) would improve provider quality or efficiency. In their experience, too few consumers sought out and used the information in this way or there was not enough capacity in community health systems to accommodate significant shifts in patients among providers. Instead, they came to view provider responses to the content of public reports as the most likely pathway through which public reporting could improve quality of care. This suggests that involving community providers in measure selection and construction is essential to the success of alliance reporting efforts (Figure). Providers are the key stakeholders in constructing alliance reports; they provide data from medical records for measure construction and vet measures constructed with medical records or claims data. Without provider involvement, measures in alliance reports would be less credible and providers would be less likely to respond to their content by undertaking quality improvement activities. According to respondents, securing provider involvement and commitment requires substantial “up-front” work; production of the first report may take longer than expected as a result.3
Address the Challenge of Funding the Public Reporting Infrastructure From the Beginning
Most alliance directors underscored the need to develop a stable funding source for alliance measurement and dissemination activities. The failure of some alliances to do so early in their existence was seen as a prominent factor in the subsequent demise of their reporting efforts. The most successful funding model employed by alliances relied on stakeholder “dues,” a portion of which were used to support the reporting infrastructure. However, in light of the often tenuous nature of voluntary community coalitions, and the shifting environments in which they operate, even this approach might not sustain public reporting over time.
Grant funding, because of its sporadic nature, was seen as an unreliable option for addressing this need. The challenge of funding the reporting infrastructure led some alliance directors to speculate about other options. One observed that local reporting activities, if they were valued, might eventually need to be funded, and possibly conducted, by state governments. Six of the alliances were located in states that had established all-payer claims databases, and 2 other alliances were in states that were taking steps to establish such databases. Although these databases do not include Medicare and, to some degree, self-insured employer data, their existence does offer the potential for states to assume a leading role in supporting the public reporting infrastructure. However, if states were to allocate funding for this infrastructure, alliances could lose some operational flexibility, and provider commitment might be more difficult to sustain.
Explore Collaborative Approaches to Increase Efficiency
Some alliance directors felt that inefficiency was built into the reporting process because each alliance had its own “backroom” process for data submission and measure construction. Assuming there are economies of scale, it may make sense for alliances to centralize these activities while retaining overall guidance and measure selection at the local level.19 However, in practice, there were relatively few attempts at collaboration among alliances to improve reporting efficiency. Collaboration was observed more often between alliances and other community organizations, as alliances attempted to meet AF4Q program reporting goals and increase their funding bases. For example, some alliances supported existing hospital association reporting efforts as a way to avoid what they saw as “unnecessary” duplication of inpatient reporting in their communities. Others attempted to partner with state agencies that maintained all-payer claims databases. Going forward, the Center for Healthcare Transparency, an effort of the Network for Regional Healthcare Improvement, is exploring ways to improve efficiency in public reporting, including the development of a national network of regional data intermediaries. One goal is to build the capacity to generate measures across multiple data sources.20 Five alliances have been early participants in these efforts.
Develop a Strategy for Responding to the National Efforts to Increase Provider Transparency
The national attention given to improving provider transparency was a mixed blessing, according to some alliance leaders: although it initially helped solidify local stakeholder support for reporting, it also stimulated other organizations to report provider performance. More health plans are offering a broader array of provider performance information to their members, especially measures of cost and efficiency, while state hospital organizations and private companies also have expanded their reporting efforts. With these information sources now available to consumers, community stakeholders are more likely to question the need for continued production of public reports by alliances.
Similarly, the growth in nationally endorsed measures has accelerated the process of alliance report development, reduced costs, and helped alleviate provider concerns about the legitimacy of report contents. However, one alliance director observed that a national consensus on measures to be included in reports could turn reporting into an activity that could be carried out by any organization. If this happens, alliances still could make valuable contributions by focusing on resolving local stakeholder issues around reporting, assisting in the dissemination of reports produced by others, and facilitating provider responses to report contents, with states or national bodies assuming day-to-day reporting responsibilities, including dataset and website maintenance. In addition, alliances could continue to collaborate with national bodies in developing and reporting new measures, such as those related to ambulatory patient experience or cost of care, that are not yet available in other reports. By doing so, alliances could position themselves as innovative “petri dishes” for expanding and refining provider transparency.
Conclusion
The AF4Q initiative demonstrated that public reporting by community coalitions is feasible, if supported by stable funding combined with technical assistance. Some alliances made important contributions to the national transparency movement in the areas of measure development and testing. Also, they were able to generate momentum for greater provider performance transparency within their communities. However, even with financial and technical support, there was wide variation in alliance stakeholder enthusiasm for public reporting, the contents of reports, and report dissemination strategies. When AF4Q program funding ended, one-third of alliances terminated their reporting efforts, and the continuation of reporting by several others seemed uncertain, underscoring the difficulties faced by alliance leaders in generating and maintaining stakeholder commitment and financial support for public reporting.
The public reporting “space” is more crowded, both locally and nationally, than when the AF4Q program began, making the continued existence of any single reporting effort less critical to transparency at the community level. However, the larger question of what entities (eg, federal government, state governments, provider associations, alliances, or some combination) are best positioned to assume broad responsibility nationally for providing and paying for transparency remained unresolved at the end of the AF4Q program. The findings of our evaluation do suggest that unless alliances and their participating stakeholders see intrinsic value in reporting, funding and technical assistance may not be sufficient to motivate reporting efforts. Then, as noted above, basic community-level public reporting efforts may need to be carried out by government entities that believe reporting has value for their programs. Whatever the ultimate resolution, if alliances do not have full responsibility in producing public reports, they still could provide value by monitoring other public reporting efforts from a community perspective and assisting in dissemination of reports produced by others.
Funding source: This supplement was supported by the Robert Wood Johnson Foundation (RWJF). The Aligning Forces for Quality evaluation is funded by a grant from the RWJF.
Author disclosures: Dr Christianson, Dr Greene, Dr Scanlon, and Ms Shaw report receipt of grants from RWJF. Dr Scanlon reports meeting or conference attendance for RWJF. Dr Greene reports meeting or conference attendance for Insignia Health.
Authorship information: Concept and design (JBC, JG, DPS); acquisition of data (JBC, DPS, BWS); analysis and interpretation of data (JBC, JG, DPS, BWS); drafting of the manuscript (JBC, DPS, BWS); critical revision of the manuscript for important intellectual content (JBC, JG, DPS); obtaining funding (JBC, DPS); administrative, technical, or logistic support (JBC, BWS); and supervision (JBC).
Address correspondence to: chris001@umn.edu.
REFERENCES
1. Scanlon DP, Beich J, Leitzell B, et al. The Aligning Forces for Quality initiative: background and evolution from 2005 to 2015. Am J Manag Care. 2016;22(suppl 12):S346-S359.
2. Mehrotra A, Hussey PS, Milstein A, Hibbard JH. Consumers’ and providers’ responses to public cost reports, and how to raise the likelihood of achieving desired results. Health Aff (Millwood). 2012;31(4):843-851. doi: 10.1377/hlthaff.2011.1181.
3. Christianson JB, Volmar KM, Shaw BW, Scanlon DP. Producing public reports of physician quality at the community level: the Aligning Forces for Quality initiative experience. Am J Manag Care. 2012;18(suppl 6):S133-S140.
4. Christianson JB, Volmar KM, Alexander J, Scanlon DP. A report card on provider report cards: current status of the health care transparency movement. J Gen Intern Med. 2010;25(11):1235-1241. doi: 10.1007/s11606-010-1438-2.
5. Christianson JB, Shaw BW. Did Aligning Forces for Quality (AF4Q) Improve Provider Performance Transparency? Evidence from a National Tracking Study. University Park, PA: Penn State; 2016. http://hhd.psu.edu/media/CHCPR/alignforce/files/2016_PR_Research_Summary.pdf.
6. Christianson JB, Ginsburg PB, Draper DA. The transition from managed care to consumerism: a community-level status report. Health Aff (Millwood). 2008;27(5):1362-1370. doi: 10.1377/hlthaff.27.5.1362.
7. Galvin R, Milstein A. Large employers’ new strategies in health care. N Engl J Med. 2002;347(12):939-942.
8. Kizer KW. Establishing health care performance standards in an era of consumerism. JAMA. 2001;286(10):1213-1217.
9. Hospital Compare. CMS.gov website. https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/hospitalqualityinits/hospitalcompare.html. Updated May 4, 2016. Accessed May 23, 2016.
10. Medicare continues effort to give consumers more information on health care quality [press release]. Baltimore, MD: CMS; November 21, 2012. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Press-releases/2012-Press-releases-items/2012-11-21.html. Accessed May 23, 2016.
11. Physician Compare. Medicare.gov website. https://www.medicare.gov/physiciancompare/search.html. Accessed May 23, 2016.
12. Interactive state report map. All-Payer Claims Database Council website. http://apcdcouncil.org/state/map. Accessed May 23, 2016.
13. Romano P, Hussey P, Ritley D. Selecting quality and resource use measures: a decision guide for community quality collaboratives. Agency for Healthcare Research and Quality website. http://www.ahrq.gov/sites/default/files/publications/files/perfmeas.pdf. Published May 2010. Accessed May 23, 2016.
14. Ryan AM, Tompkins CP. Efficiency and value in healthcare: linking cost and quality measures. Washington, DC: National Quality Forum; November 14, 2014.
15. Scanlon DP, Christianson JB, Ford EW. Hospital responses to the Leapfrog Group in local markets. Med Care Res Rev. 2008;65(2):207-231. doi: 10.1177/1077558707312499.
16. Scanlon DP, Wolf LJ, Alexander JA, et al. Evaluating a complex, multi-site, community-based program to improve healthcare quality: the summative research design for the Aligning Forces for Quality initiative. Am J Manag Care. 2016;22(suppl 12):eS8-eS16.
17. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71(5):522-554. doi: 10.1177/1077558714541480.
18. Greene J, Farley DC, Christianson JB, Scanlon DP, Shi Y. From rhetoric to reality: consumer engagement in 16 multi-stakeholder alliances. Am J Manag Care. 2016;22(suppl 12):S403-S412.
19. Frieden J. Network to make more regional health data available. MedPage Today website. www.medpagetoday.com/PublicHealthPolicy/HealthPolicy/55651. Published January 13, 2016. Accessed May 23, 2016.
20. CHT measure dashboard complementary to Catalyst for Payment Reform employer-purchaser guide to quality measure selection. The Network for Regional Healthcare Improvement website. www.nrhi.org/news/cht-measure-dashboard-complementary-to-catalyst-for-payment-reform-employer-purchaser-guide-to-quality-measure-selection/. Published October 30, 2015. Accessed May 23, 2016.