Evidence-Based Oncology

July 2022, Volume 28, Issue 5, Pages SP284-SP285

ASCO Spotlight With Ravi B. Parikh, MD, MPP: Can Nudges Increase the Number of Serious Illness Conversations in Community Oncology?

Ravi B. Parikh, MD, MPP, assistant professor, Department of Medical Ethics and Health Policy and Medicine, Perelman School of Medicine, University of Pennsylvania, presented long-term results from an experiment with an algorithm designed to prompt oncologists to have serious illness conversations.

Getting oncologists to speak with patients about their care goals at the start of treatment has been a mission of the quality care movement for years. But matching this with the reality of busy clinic schedules has been a challenge, especially among community oncologists, who typically see more patients than their counterparts in academic medicine.


Yet if conversations about serious illness (SI) or end-of-life (EOL) care are to translate into hospice referrals—or fewer costly, toxic late treatments that won’t work—community practice is where they must happen, according to Ravi B. Parikh, MD, MPP, an assistant professor in the Department of Medical Ethics and Health Policy and Medicine at Perelman School of Medicine, University of Pennsylvania in Philadelphia.


Parikh spoke with Evidence-Based Oncology™ (EBO) during the 2022 American Society of Clinical Oncology (ASCO) Annual Meeting, just before he presented long-term findings from a clinical trial. The trial tested a protocol that prompted oncologists to have SI conversations with patients predicted to be at high risk of death within 6 months, based on an algorithm fed inputs from their electronic health record (EHR).1
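
At its core, the model is a machine learning classifier that turns structured EHR data into a mortality-risk estimate for each patient. As a rough, hypothetical sketch of that idea (the features, model choice, threshold, and synthetic data below are invented for illustration, not drawn from the trial's actual software), such a classifier might look like this:

```python
# Illustrative sketch only: a classifier over structured EHR data that
# estimates each patient's 6-month mortality risk. Everything here
# (features, model choice, threshold, data) is invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 1000
# Stand-ins for EHR-derived features (e.g., age, labs, comorbidity counts).
X = rng.normal(size=(n_patients, 4))
y = (rng.random(n_patients) < 0.15).astype(int)  # 1 = died within 6 months

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The model emits a risk probability per patient; those above a chosen
# threshold would be flagged as candidates for an SI conversation prompt.
risk = model.predict_proba(X_test)[:, 1]
flagged = risk >= 0.10  # threshold is an assumption, not the trial's value
```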


This type of work applies nudge theory, a Nobel Prize–winning concept rooted in behavioral economics that calls for the use of positive reinforcement to influence decision-making.2 Many health-related applications have involved patient behavior; Parikh’s team applies the concepts to physician behavior.


Parikh’s team at Penn Medicine deployed the protocol, and data first published in JAMA Oncology showed that after oncologists received artificial intelligence–driven prompts at the start of their shift, conversations increased 4-fold, from a rate of 3.4% to 13.5%.3 As Parikh explained during his ASCO presentation, the protocol was layered with a “kitchen sink” of motivational tools (a simplified sketch of the full workflow appears after the list):

  • Each week, oncologists received a secure email showing how their rate of SI conversations compared with that of their peers.
  • Oncologists were sent a secure list each week of high-risk patients on their upcoming schedule who, based on the algorithm, were deemed candidates for SI conversations. The doctors were asked to preselect up to 6 patients per week for conversations.
  • Opt-out texts were sent ahead of high-risk patient encounters, on the morning of the clinic visit.
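
Taken together, these pieces form a simple pipeline: score patients, build a capped weekly list per clinician, and send a same-day opt-out reminder. The sketch below illustrates that flow under stated assumptions (the names, risk threshold, and message wording are invented; this is not the trial's software):

```python
# Hypothetical sketch of the nudge pipeline described in the list above:
# score -> capped weekly list per clinician -> same-day opt-out text.
# All names, the risk threshold, and the message wording are invented.
from dataclasses import dataclass

RISK_THRESHOLD = 0.10  # assumed "high risk" cutoff, for illustration only
WEEKLY_CAP = 6         # per the article: up to 6 preselected patients/week

@dataclass
class Patient:
    name: str
    clinician: str
    risk: float       # model-predicted 6-month mortality probability
    visit_date: str

def weekly_candidates(patients: list[Patient], clinician: str) -> list[Patient]:
    """High-risk patients on one clinician's upcoming schedule, capped at 6."""
    high_risk = [p for p in patients
                 if p.clinician == clinician and p.risk >= RISK_THRESHOLD]
    high_risk.sort(key=lambda p: p.risk, reverse=True)  # highest risk first
    return high_risk[:WEEKLY_CAP]

def morning_text(patient: Patient) -> str:
    """Opt-out reminder sent the morning of a flagged patient's visit."""
    return (f"Reminder: {patient.name} ({patient.visit_date}) may be a "
            f"candidate for a serious illness conversation today. "
            f"Reply STOP to opt out.")
```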

Long-term results shared at ASCO showed that after the initial 16-week study period, the conversation rate in the intervention group held steady at 12% through 40 weeks. Of interest to payers, there was a decrease in the use of systemic therapy at EOL compared with the control group (6.8% vs 9.3%; adjusted odds ratio, 0.27; 95% CI, 0.12-0.63; P = .002). However, there was no difference in hospice enrollment or length of hospital stay.1
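
To put the reported effect in perspective (a back-of-the-envelope check, not a figure from the abstract), the unadjusted odds ratio implied by the raw rates is

\[
\mathrm{OR}_{\text{unadjusted}} = \frac{0.068/(1-0.068)}{0.093/(1-0.093)} = \frac{0.0730}{0.1025} \approx 0.71
\]

The much smaller adjusted value of 0.27 reflects the trial's covariate and design adjustments, so the two figures are not directly comparable.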

Parikh discussed where things stand today with the protocol and what new steps are planned. This interview has been lightly edited for clarity.

EBO: Since you collected your initial data, has this protocol been fully implemented at Penn?

Parikh: We’ve rolled it out across the entire cancer service line in our 12 medical oncology clinics. It’s been deployed system-wide because of its durable success in raising the rates of conversations. We’ve also sought to expand upon the intervention, both by refining the algorithm and by incorporating other elements, including having patients fill out their own SI conversation at home—by thinking about certain aspects of care goals and trade-offs—and reporting those to clinicians. I think there are a lot of ways machine learning and behavioral nudges can merge to create long-lasting change in care delivery. And I think our trial is one of the first to test those concepts.

EBO: We’ve all become accustomed to “nudge” applications in everyday life—anyone who uses a Peloton or tries Noom has experienced this. Is the nudge experience not as foreign to physicians as it might have been 10 years ago?

Parikh: Overall, there was receptivity toward the idea, partially because there had been a lot of education before we started the trial on the importance of SI communication. So doctors knew this was important. What the nudge does is create a nonfinancial, nonaggressive mechanism of trying to change the architecture of how doctors make their choices—to make it easier to do the right thing. Prior to the study, we were asking doctors to look at their panel and try to identify those patients who they felt might be at risk for death or symptom [worsening]. That part requires a lot of mental energy for the doctor; oftentimes, doctors simply don’t have time to do it. And so they just don’t do it, they don’t have the conversation, or they have conversations too late.


What we’re trying to make easier here is not only identifying patients—using an algorithm-based strategy—but also reminding doctors on the morning of the clinic visit that a conversation may be appropriate. That removes not only the mental energy of identifying the patients but also the burden of remembering to act on the day of clinic. We think that’s a replicable strategy to remove some of the cognitive burden of these decisions. Of course, having the conversation is not something an algorithm is going to do. That’s the hardest part, and that’s still on them. And that’s why we didn’t achieve close to 100% of patients getting a conversation. There’s still a lot to overcome here. But I think, with regard to identification and the reminder process, this type of strategy doesn’t necessarily infringe on the doctor’s autonomy.

EBO: Which subgroups of doctors did the best?

Parikh: We actually just put out [findings from] a study on this. [Those who had more conversations were] younger doctors, doctors who tended to practice in academic settings, and doctors who saw a lower number of patients on a given day. The fourth group was doctors who had higher baseline rates of conversation, so they were bought in. Those folks tended to respond disproportionately and actually accounted for most of our intervention effect. Busier doctors in general oncology practices, or doctors who were not having a lot of conversations at baseline…still had some increases due to the intervention, but not nearly as much as the other groups. So those doctors are where the phenotyping work is really important.

EBO: Did those on the lower end of the improvement spectrum get better over time? Will this process be similar to a New Year’s resolution, where you have to periodically review their data, see where they improved, and celebrate the wins, instead of just saying, “You were on the low end”? What is the approach for this group?

Parikh: This is a great question. We have a graph in the presentation that speaks exactly to this: the rates of conversations for the average clinician went up almost 6-fold in the first 4 to 5 months of the intervention. And then in the follow-up period [they] decreased, as the effect of any behavioral intervention does.


Now, the difference here is that when we relied on education alone, the rates went basically back to baseline; with the intervention, they settled at around 4 to 5 times the baseline. So there was a decrease during the course of the follow-up period, but the rates still settled much higher than the baseline rate. And in fact, the last 2 months of our trial were during [the] COVID-19 [pandemic]. And rates of conversations actually went up [then], largely because I think doctors were more in tune with wanting to do what their patients wanted during COVID-19 and trying to avoid treatment when it was unnecessary.


The algorithm in the intervention helped there because it pointed toward the types of patients most likely to do poorly with COVID-19 [infection], and it targeted the intervention there. We were expecting that, because people weren’t coming into clinic, the conversation rates would just fall off a cliff….

EBO: What was the effect of having these conversations via telemedicine?

Parikh: It’s hard to have these conversations via telemedicine. Because of the intervention, the doctors had some experience doing the conversations and may have been a little more receptive toward having [them] over telemedicine, because otherwise, we’re not usually trained to do conversations….One key aspect I think is important to mention is that normally, doctors and nurse practitioners and physician assistants wait until something goes wrong to have these conversations. So when someone has a bad scan result, or someone’s lab tests are going in the wrong direction, that’s when we usually have the conversations. And that’s not an early conversation; that’s a delayed conversation.


We’re forcing patients to not only process the weight of bad news, but also discuss a lot of really heavy topics. What we’re trying to do with this is say, “Let’s not wait for a bad event. Let’s use some of our predictive algorithm tools to identify the right patients and have these conversations early on, so that patients can be in the right mindset or not necessarily think about a million things.”

EBO: What are your next steps?

Parikh: There are 3 key areas that we are focused on. The first is disseminating this outside of a Penn Medicine setting; we’re running randomized trials now using algorithm-based strategies in community oncology settings, where most patients receive their cancer care, to try to test the same concept.


The second thing that we’re doing, as I mentioned, is trying to refine the behavioral intervention a bit more. Doctors are busy—they have a lot on their plate, and we keep pushing them to do more and more without making their lives easier. So another thing we’re trying is allowing patients to think about some of this at home and letting high-risk patients fill in their own conversation in advance, so that doctors can, if it makes sense, jump in on a particular aspect of the conversation.


And the last thing we’re trying involves refining the algorithm to make it more personalized on a cancer-specific level. We’re using other sources of data, particularly patient-reported data, which we’ve started to collect a lot more these days. We feel it’s important to get the algorithm right for situations like this.

EBO: Given what you’re saying, you’ll be working more with community oncology physicians. Might practice setting (academic vs community) be one of the algorithm’s cut points?


Parikh: Community oncology practices are busier, so perhaps they’re going to respond less [than other settings]. But the counterpoint is that in academic settings, when people are getting second opinions or coming in for perhaps 1 or 2 visits and then going back to the community, maybe there’s not as much of an incentive to have a conversation, even with a higher-risk patient.


Community doctors are the trusted doctors whom patients often see for years and years. Perhaps there’s more receptivity to bringing up the conversation when it’s necessary. So I’m not sure which one of those is going to win out. We’ve seen a lot of enthusiasm about this from community practices, particularly community practices that have embedded palliative care in their own practice, because they’re attuned to the idea that these folks need better conversations on EOL care. But doctors need a little bit of a nudge to remind them to do the right thing and have the conversation when necessary. The algorithm can sometimes relieve some of that cognitive burden.

EBO: In terms of scaling this, right now you’re in an academic center. If this concept were applied broadly, who would pay for it? How and from where would you be reimbursed?

Parikh: There’s been a lot of work on the entrepreneurial side around partnerships with payers….There’s a high cost of care near EOL, but it’s more that there’s unwarranted and unwanted care near the end. Even though I think [reimbursement could come] from the payer side, there’s alignment of incentives toward trying to reduce costs and ensure that the patients who are getting chemo[therapy] are truly the ones who would benefit. An intervention like this can be helpful.


Several interventions have tested similar algorithm-based palliative care, or algorithm-based supportive care, to try to do this in concert with payers. I really think there’s going to be more emphasis on the practices, particularly those participating in value-based contracts that may be judged by performance indicators, including care near EOL….I think there’s going to be a lot more likelihood of success here, first, because practices have more granular and up-to-date data to make the algorithm better, and second, because doctors don’t want to hear from their payer that they need to have a conversation. Doctors don’t want to hear that they need to refer [a patient] to palliative care. It’s better to have doctors make that decision for themselves. When we can embed something like this into practices that are committed to improving EOL and palliative supportive care, I think that’s going to be the biggest alignment.

References
1. Parikh RB, Zhang Y, Small D, et al. Long-term effect of machine learning–triggered behavioral nudges on serious illness communication and end-of-life outcomes among patients with cancer: a randomized clinical trial. J Clin Oncol. 2022;40(suppl 16):109. doi:10.1200/JCO.2022.40.16_suppl.109
2. Patel MS, Volpp KG, Asch DA. Nudge units to improve the delivery of health care. N Engl J Med. 2018;378(3):214-216. doi:10.1056/NEJMp1712984
3. Manz CR, Parikh RB, Small DS, et al. Effect of integrating machine learning mortality estimates with behavioral nudges to clinicians on serious illness conversations among patients with cancer: a stepped-wedge cluster randomized clinical trial. JAMA Oncol. 2020;6(12):e204759. doi:10.1001/jamaoncol.2020.4759. Published correction appears in JAMA Oncol. 2022;8(4):648.
