Recommendations on the best improvements CMS can make in the near term, through the current rule-making process, to how it establishes, updates, and resets ACO financial benchmarks.
Over the past 2 months, accountable care organizations (ACOs) and stakeholder groups have spent considerable time drafting comments in response to the 400-page proposed Medicare Shared Savings Program (MSSP) rule in time for the just-passed deadline of February 6.1,2 Clearly, the most challenging aspect of the program is how best to establish, update, and reset an ACO’s financial benchmark. While CMS did not propose any specific improvements to the benchmark calculations, the agency did devote more than 50 pages of the proposed rule to discussing several options and asked for public comment on these and any other alternatives. CMS is well aware that the ACO provider community believes the benchmarking is flawed, and it wants to learn how improvements can be made. Because the program has produced only 1 year of performance results to date, CMS knows that neither it nor the ACO provider community has the empirical evidence to determine how best to revise its methods. For these reasons, and based on what we do know or believe, the following recommendations are offered for debate.
Establishing the Benchmark
At least for the near term, establishing an ACO’s financial benchmark—the spending level an ACO must stay below to earn shared savings—should continue to be based on the historical costs of those Medicare beneficiaries who would have been assigned to the ACO during the 3 years prior to its 3-year contract period. This remains the most complete way to risk adjust. The 3 benchmark years should continue to be weighted 10% (BY1), 30% (BY2), and 60% (BY3), respectively, because, as CMS has argued, the most recent benchmark year most accurately reflects beneficiary health status and expenditures.
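To make the weighting arithmetic concrete, a minimal Python sketch follows. The dollar figures and function name are purely illustrative, and the sketch omits the trending and risk-adjustment steps CMS also applies to each benchmark year.

```python
# Minimal sketch of the 10-30-60 weighting described above. Dollar figures
# are hypothetical; trending and risk adjustment are omitted.

def weighted_benchmark(by1, by2, by3, weights=(0.10, 0.30, 0.60)):
    """Per-beneficiary benchmark as a weighted average of 3 benchmark years."""
    w1, w2, w3 = weights
    return w1 * by1 + w2 * by2 + w3 * by3

# Hypothetical per-beneficiary spending for BY1, BY2, and BY3.
print(weighted_benchmark(10_000, 10_400, 10_800))  # 10600.0
```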
Among other criticisms, it is argued that the current method rewards high-cost or less efficient providers with a higher established benchmark and, therefore, a higher likelihood of success. This is a legitimate complaint; ACOs with high benchmarks were significantly more likely to have earned shared savings in 2012 and 2013—the program’s first performance year (PY1).3 However, the trade-off is that these are the physicians whose practice patterns the MSSP needs to influence. Also, because the MSSP is a voluntary program, CMS cannot prevent ACOs from forming in high-cost Medicare regions.
Resetting the Benchmark
Again, at least for the near term, resetting the benchmark should continue to be calculated similarly to how it is established, but with 2 changes. First, in subsequent contract periods, as CMS proposes, the 3 benchmark years should be re-weighted from 10-30-60 to 33-33-33. Over-weighting the third benchmark year at 60% can have the effect of punishing success and rewarding failure. In a second 3-year contract, ACOs that spent below their benchmark—particularly if they increasingly spent below it—would be punished because their reset benchmark would be set excessively low, while ACOs that spent above their benchmark would be rewarded because their reset benchmark would be set excessively high. As CMS notes, equally weighting the 3 benchmark years “could more gradually lower the benchmarks of ACOs that perform[ed] well in their first agreement period.”
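A small, hypothetical comparison illustrates why the 60% weight on the most recent year cuts deepest for an ACO whose spending fell over its first contract period. All numbers are invented, and trending and risk adjustment are again omitted.

```python
# Invented illustration: equal weighting softens the reset for an ACO
# whose spending declined across its first contract period.

def reset_benchmark(y1, y2, y3, weights):
    w1, w2, w3 = weights
    return w1 * y1 + w2 * y2 + w3 * y3

# Performance-year spending for an ACO that kept reducing costs.
spend = (10_000, 9_600, 9_200)

print(reset_benchmark(*spend, (0.10, 0.30, 0.60)))  # 9400.0: a steep drop
print(reset_benchmark(*spend, (1/3, 1/3, 1/3)))     # ~9600.0: a more gradual one
```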
Second, CMS proposes to add the savings an ACO earns under a prior contract period into the reset benchmark amount. This makes sense: adding the savings back means the reset does not negate the savings already achieved. CMS recognizes that, as the agency states in the proposed rule, “achieving savings” becomes “financially unattractive...in future agreement periods.” The agency recognizes that under current program rules, an ACO is forced to compete against itself, or chase diminishing returns, in each subsequent contract period. CMS proposes to add back only those savings earned by, or awarded to, the ACO. (Under MSSP Track 1, the track in which over 98% of the 405 ACOs participate, this is 50% of total savings.) If the loss of achieved savings makes continued participation unattractive, logically CMS should not correct the problem halfway; instead, it could contribute 100% of achieved savings into the reset benchmark.
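The arithmetic of the add-back, contrasting the proposed earned-savings add (50% under Track 1) with the 100% add suggested here, can be sketched as follows. All figures are hypothetical.

```python
# Hypothetical add-back of prior-period savings into the reset benchmark.

total_savings = 400      # total spending below benchmark in the prior period
track1_share = 0.50      # ACO's earned share of savings under Track 1
raw_reset = 9_600        # reset benchmark before any add-back

reset_with_earned_savings = raw_reset + track1_share * total_savings
reset_with_all_savings = raw_reset + total_savings
print(reset_with_earned_savings, reset_with_all_savings)  # 9800.0 10000
```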
Updating the Benchmark
Currently, CMS updates an ACO’s benchmark year over year, for PY2 and PY3, using the national growth rate applied as a flat dollar amount. This method is problematic since a national rate is only marginally related to regional or local market circumstances, which are known to vary widely. For this reason, and because a regional update would reduce the difference between Medicare Advantage (MA) and ACO rate setting, MedPAC and others argue benchmarks should be updated, and possibly reset as well, using a regional or local-area cost growth factor.4 One way of doing this would be to blend national and regional rates over a number of years to transition gradually to a fully regional update. Because regional rates can vary widely year to year, another approach would be to use whichever update value (national or regional) is higher.
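The two transition mechanics just described, a multi-year blend and a higher-of-the-two rule, can be sketched as follows; the update amounts and blend weight are purely illustrative.

```python
# Invented numbers: (1) blend national and regional update factors over a
# transition period, or (2) use whichever update is higher in a given year.

def blended_update(national, regional, regional_weight):
    """Linear blend; regional_weight would rise toward 1.0 over the transition."""
    return (1 - regional_weight) * national + regional_weight * regional

def higher_update(national, regional):
    return max(national, regional)

national, regional = 250.0, 180.0  # hypothetical per-beneficiary dollar updates
print(blended_update(national, regional, 0.25))  # 232.5
print(higher_update(national, regional))         # 250.0
```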
CMS discusses 3 regional update options in the proposed rule, including replicating the method used in the Physician Group Practice (PGP) demonstration, which updated the benchmark using a regional reference population. Using regional costs could be an improvement; however, it is unclear how best to calculate risk-adjusted regional fee-for-service (FFS) costs for an ACO population. Regardless, this approach still ignores an ACO’s actual performance. Because an ACO is a self-selected group of providers, its practice patterns may or may not accurately reflect regional costs and/or how its region’s costs are trending. This approach might favor outliers; in addition, it indiscriminately updates benchmarks for ACOs that are succeeding (spending below their benchmark) and ACOs that are failing (spending above their benchmark).
Presuming the goal is to calculate expected costs as accurately as possible, a regional update could be provided in the context of an ACO’s prior-year performance. Explained simply, if in PY1 an ACO saved $100, or spent $100 less than its benchmark, and the regional update was $100, its PY2 benchmark update would be $200. If the ACO lost $50, its update would be $50. Beyond improving accuracy, rolling savings into the update could negate the need to add savings—should they be earned—into the reset benchmark. This approach would also discourage gaming: an ACO can change its physician panel at any time, and thus its beneficiary population, thereby favorably affecting its benchmark.
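The performance-contextualized update can be expressed in a few lines; the dollar amounts simply mirror the hypothetical example above.

```python
# Sketch of the performance-contextualized update described above: the
# prior year's savings (or losses) are rolled into the regional update.

def performance_adjusted_update(regional_update, prior_year_result):
    """prior_year_result is positive for savings, negative for losses."""
    return regional_update + prior_year_result

print(performance_adjusted_update(100, 100))  # saved $100 -> update of 200
print(performance_adjusted_update(100, -50))  # lost $50   -> update of 50
```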
At least 5 comments can be made. First, using the ACO’s actual population for benchmark updating in PY2 and PY3 helps to address the problem of unstable assignment, since an ACO’s benchmark would remain based on, or relevant to, the actual population it was serving. Studies5 (also M. Chernew, T. McGuire, J.M. McWilliams, unpublished data, 2014) estimate that year over year an ACO loses about 20% of its initially assigned population, because beneficiaries do not enroll in an ACO as they do in MA—they can, and do, change providers in any given year. This means that in PY2 and PY3 the population an ACO serves is increasingly less representative of the beneficiaries upon which its established benchmark was based. Because it is insensitive to patient churn, or turnover, using a national average to update the benchmark leaves unstable assignment a persistent problem and makes the national update itself an unsatisfying remedy.
Second, by updating and resetting the benchmark using regional costs, CMS likely would not compound the windfall problem noted above; again, ACOs with higher established benchmarks are already advantaged. Third, CMS is in effect using the unstable assignment, or patient churn, problem to incent providers to accept risk via a new prospective-assignment Track 3. Under prospective assignment there would be limited patient churn in any 1 performance year because, with limited exceptions, the ACO is responsible for its assigned beneficiaries for the entire performance year. Though prospective assignment could be attractive under Track 1, it becomes an unnecessary solution if CMS were to update benchmarks using the ACO’s actual population. Fourth, this approach would move CMS in the direction the agency says it is seeking: financial benchmarks based on an ACO’s actual assigned population (as in the Pioneer ACO demonstration). Finally, this approach is also possibly the most equitable, since it does not favor ACOs in regions with high or low per-member-per-year costs, high or low cost growth, or some combination thereof.
There are other improvements to the MSSP that could help more ACOs beat their benchmark. In PY1, fewer than 25% of ACOs earned shared savings. CMS is proposing to improve the current 2-step beneficiary assignment process by including nurse practitioners, physician assistants, and clinical nurse specialists in the first assignment step. Allowing beneficiaries to proactively assert, or “attest,” that they are participants in an ACO would likely improve provider-patient affinity, thereby reducing beneficiary churn. Waiving payment rules would also likely help ACOs succeed in spending below their benchmark. For example, CMS may allow ACOs greater access to skilled nursing and home healthcare, allow for referrals to post-acute providers, expand use of telehealth services, and improve patient engagement by reducing beneficiary cost sharing.
Regardless of how, or whether, CMS changes ACO benchmarking and related program rules, it appears that what will ultimately determine an ACO’s success is its quality score.6 Currently, an ACO’s earned shared savings is multiplied by its quality score; if an ACO’s quality score is perfect, its multiplier is 1.0. Unlike MA quality scoring, which is bonus only, ACO quality scoring is penalty only. This point aside, if we assume CMS will move toward updating (and resetting) ACO benchmarks using regional costs, then, because ACO beneficiaries would make up an ever-increasing percentage of their region’s population, regional costs and an ACO’s benchmark would eventually become one and the same. At that point, the only way to distinguish an ACO’s performance would be by its quality score.
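The quality-score multiplier works roughly as follows; the savings amount, sharing rate, and score shown are invented for illustration.

```python
# Invented figures: earned shared savings are scaled by the ACO's quality
# score, so anything below a perfect score acts as a penalty.

total_savings = 1_000_000   # spending below the updated benchmark
sharing_rate = 0.50         # Track 1 sharing rate
quality_score = 0.90        # 1.0 would be a perfect score (no penalty)

earned_savings = total_savings * sharing_rate * quality_score
print(earned_savings)       # 450000.0
```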
Going forward, CMS should make incremental improvements to financial benchmarking, largely because the agency does not have the empirical evidence to do otherwise and, more practically, because whatever CMS decides in its final rule—to be published in April or May—it will have only a limited number of months to implement changes. For these reasons, CMS should conduct ongoing rule-making to improve how it establishes, updates, and resets ACO benchmarks, however incrementally, as the evidence evolves. CMS should also ensure MSSP program data, in its entirety, is quickly made available to the provider community and the public, with appropriate patient confidentiality safeguards. Currently, the ACO community and interested stakeholders and scholars have a very limited ability to evaluate the program. The ACO program may not make sense for everyone, but everyone ought to be able to make sense of it.

Author Affiliation: National Association of ACOs (NAACOS), Washington, DC.
Source of Funding: None.
Author Disclosures: Dr Introcaso reports no conflicts of interest. The views and comments expressed in this article are those of the author and do not necessarily represent those of NAACOS.
Address correspondence to: David Introcaso, PhD, NAACOS, 1301 Pennsylvania Ave, NW, Ste #500, Washington, DC 20004. E-mail: dintrocaso@naacos.com.

References
1. Medicare Program; Medicare Shared Savings Program: Accountable Care Organizations: a proposed rule by the Centers for Medicare & Medicaid Services. Federal Register website. https://www.federalregister.gov/articles/2014/12/08/2014-28388/medicare-program-medicare-shared-savings-program-accountable-care-organizations. Published December 8, 2014. Accessed February 22, 2015.
2. Medicare Program; Medicare Shared Savings Program: Accountable Care Organizations: a rule by the Centers for Medicare & Medicaid Services. Federal Register website. https://www.federalregister.gov/articles/2011/11/02/2011-27461/medicare-program-medicare-shared-savings-program-accountable-care-organizations. Published November 2, 2011. Accessed February 22, 2015.
3. Heiser S, et al. Unpacking the Medicare Shared Savings proposed rule: geography and policy. Health Affairs Blog website. http://healthaffairs.org/blog/2015/01/22/unpacking-the-medicare-shared-savings-proposed-rule-geography-and-policy/. Published January 22, 2015. Accessed February 22, 2015.
4. MedPAC’s letter to Marilyn Tavenner. MedPAC website. http://www.medpac.gov/documents/comment-letters/medpac-comment-on-cms-s-medicare-shared-savingsprogram-accountable-care-organizations-proposed-rule.pdf?sfvrsn=0. Published February 2, 2015. Accessed February 22, 2015.
5. McWilliams JM, et al. Outpatient care patterns and organizational accountability in Medicare. JAMA Intern Med. 2014;174(6):E1-E6. http://archinte.jamanetwork.com/article.aspx?articleid=1861039.
6. Details for title: CMS-1612-FC. CMS website. http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeeSched/PFS-Federal-Regulation-Notices-Items/CMS-1612-FC.html. Published November 13, 2014. Accessed February 22, 2015.