Will Shapiro Shares How Artificial Intelligence Is Benefiting Oncology Care

Will Shapiro, vice president of data science at Flatiron Health, discusses how the company is using large language models (LLMs) to increase efficiency while ensuring frameworks are in place for safety and quality.

Machine learning (ML) is a very powerful artificial intelligence (AI) tool, and there need to be guidelines to ensure patient safety and quality of care, says Will Shapiro, vice president of data science at Flatiron Health.

Transcript

How is Flatiron Health using AI in oncology care?

We're very focused on what the use case is. One of the things my team does a lot of is build and use machine learning algorithms to read through unstructured documents, because there's an enormous amount of really valuable information that isn't routinely captured in a structured way. Things like a patient's stage, biomarker status, even diagnosis date; these are really critical variables, especially for researchers, and they have traditionally had to be manually extracted.

Our mission as a company, part of it is to learn from the experience of every person with cancer. And that's something that really motivates me every day. Machine learning is a very powerful tool in that context.

For the task of using ML to read patient charts, we have a pretty lengthy evaluation framework that we've developed that looks at things like bias across many different strata: age, race, gender, ethnicity, location. But then it also asks, “Okay, if we've generated something with ML, and we've generated it using humans, do they replicate in the same way when you're asking an analytic question or a research question?”
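The stratified comparison described above can be sketched in a few lines. This is a hypothetical illustration, not Flatiron's actual framework; the record fields, labels, and the simple agreement metric are all assumptions made for the example:

```python
# Hypothetical sketch: compare ML-extracted chart variables against human
# abstraction, reporting agreement within each demographic stratum so that
# differences across strata (a possible sign of bias) become visible.
from collections import defaultdict

def stratified_agreement(records, strata_key):
    """records: dicts with 'ml_label', 'human_label', and demographic fields.
    Returns the ML-vs-human agreement rate per stratum."""
    totals = defaultdict(int)
    matches = defaultdict(int)
    for r in records:
        stratum = r[strata_key]
        totals[stratum] += 1
        if r["ml_label"] == r["human_label"]:
            matches[stratum] += 1
    return {s: matches[s] / totals[s] for s in totals}

# Illustrative data only: two age strata for a single extracted variable (stage).
records = [
    {"age_group": "<65", "ml_label": "stage II",  "human_label": "stage II"},
    {"age_group": "<65", "ml_label": "stage III", "human_label": "stage II"},
    {"age_group": "65+", "ml_label": "stage I",   "human_label": "stage I"},
]
print(stratified_agreement(records, "age_group"))  # {'<65': 0.5, '65+': 1.0}
```

A real evaluation would also test, as Shapiro notes, whether analyses built on ML-derived variables replicate those built on human-abstracted ones, not just raw label agreement.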

On the other hand, at the point of care, we've started to use machine learning more recently. One of the things I'm really excited about is a new feature that uses LLMs [large language models] to map regimens authored by practices to NCCN [National Comprehensive Cancer Network] guidelines. And that has a totally different framework for thinking about evaluation.

One of the things this gets at is that with this really powerful new generation of LLMs and gen-AI [generative AI] tools that can do so many different things, you really need to think about what the use case is, and then think about what the appropriate evaluation framework for safety and quality is within that specific context, because it might be very different. It's very different to suggest a list of options than to say, “Here's what you should do.” Those have radically different implications. And the way that the output of an algorithm is contextualized within someone's workflow is a big part of this: being told to do something is different than being presented with an array of options and the evidence that supported each of them.
