New artificial intelligence algorithms can target lung cancer tumors for radiation therapy just as well as an expert radiation oncologist—but can do it 75% to 96% faster.
These AI algorithms were not developed over years of work by radiation oncologists, or even by IT experts with medical backgrounds, but by data scientists with no medical training, in only 10 weeks, as part of a crowdsourcing innovation challenge.
A team of researchers from the Dana-Farber Cancer Institute, Brigham and Women’s Hospital, Harvard Catalyst, Harvard Business School and the Laboratory for Innovation Science at Harvard collaborated with Topcoder, a crowdsourcing platform and network, to run the 10-week prize-based contest. The contest called on data scientists to develop AI-based solutions for tumor segmentation that could replicate the accuracy of an expert radiation oncologist.
The AI algorithms that were developed performed the tumor segmentation task in 15 seconds to two minutes per scan, compared with an average of eight minutes per scan for radiation oncologists performing manual segmentation, Raymond Mak, M.D., of Brigham and Women’s Hospital and Dana-Farber Cancer Institute, told FierceHealthcare.
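Those per-scan times line up with the 75% to 96% speedup quoted at the top of the article. A quick back-of-the-envelope check (the figures below are just the times quoted here, not additional study data):

```python
# Sanity-check the reported 75%-96% speedup from the per-scan times
# quoted in the article: AI at 15 s to 2 min/scan vs. ~8 min/scan manually.
MANUAL_SECONDS = 8 * 60  # expert radiation oncologist: ~8 minutes per scan


def speedup_pct(ai_seconds: float, manual_seconds: float = MANUAL_SECONDS) -> float:
    """Percentage reduction in time per scan relative to manual segmentation."""
    return 100.0 * (1.0 - ai_seconds / manual_seconds)


print(speedup_pct(2 * 60))  # slowest AI algorithm (2 min/scan): 75.0
print(speedup_pct(15))      # fastest AI algorithm (15 s/scan): 96.875
```

The slowest algorithm is 75% faster and the fastest about 97% faster, consistent with the article's 75%-to-96% range once rounded.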
This opens up the potential for these AI solutions to improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings, Mak said.
“The ability to rapidly develop high-performing algorithms for tumor segmentation, and doing so in a cost-efficient way, has the potential to markedly improve how we deliver lung cancer therapy,” Mak said. The results of the challenge were presented in a study published Thursday in JAMA Oncology.
Radiation therapy is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand, according to the study authors. Segmenting tumors, essentially outlining them so that radiation hits the tumor without damaging nearby organs, is a time-consuming and resource-intensive task requiring substantial subspecialty training.
“Reducing the time costs associated with tumor segmentation, without losing any of the accuracy, allows clinicians to reallocate their time and attention to fine-tuning therapy and spending more time with patients,” Mak said. “It may also provide a potential solution that addresses the larger skilled oncology workforce crisis experienced by under-resourced health care systems worldwide.”
The ultimate goal is to take these AI algorithms and deploy them in the clinical setting.
“This kind of tool also opens the door to innovative applications in the areas of clinical training, quality assurance, and multi-institutional research trials,” said Eva Guinan, M.D., professor of radiation oncology at Dana-Farber Cancer Institute and Harvard Medical School and the paper’s senior author.
The next steps include validating the AI algorithms with a crowdsourcing approach to get feedback from radiation oncology experts and then developing prospective trials to compare the performance of AI with human backups, Mak said.
“We’re not at a point where we can go ‘hands-free.’ It’s going to be an AI-plus-human approach to see if it improves task efficiency and quality of the segmentation and whether it can improve outcomes,” Mak said. “If we’re more consistent and can reduce the potential of human error and then looking at the outcomes after that—do we improve tumor control? Do we improve human performance?”
Mak envisions that these AI algorithms could be deployed in clinics with less experienced clinicians or in under-resourced areas, such as regions in Africa, to improve radiation therapy and cancer care quality.
Crowdsourcing to solve medical problems
The challenge contest also highlights the potential to use crowdsourcing to solve medical problems in a much faster time frame.
"The traditional academic approach would be to get grant funding, hire one or two students or post-doctorate fellows, put on a project for a few years, and get one solution at the end of the day,” Mak said.
The data scientists competing in the challenge were given a data set including computed tomographic (CT) scans and lung tumor segmentations generated by a radiation oncology expert. The CT scans came from 461 patients, with 77,942 images in total and 8,144 images with a tumor present. Contestants also were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms to try to replicate the radiation oncologists’ ability.
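To make the segmentation task concrete: an algorithm's output for each scan is a tumor mask that gets scored against the expert's contour. A common overlap metric for this kind of comparison is the Dice coefficient; the sketch below uses it purely as an illustration (the study's exact scoring metric is not detailed in this article), with a toy 2D slice standing in for real CT data.

```python
# Illustrative sketch: scoring a predicted tumor mask against an expert
# contour with the Dice overlap coefficient. Toy 2D arrays stand in for
# real CT slices; the study's actual scoring metric is not given here.
import numpy as np


def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between predicted and expert binary tumor masks (0..1)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom


# Toy "slice": a 4x4 expert tumor mask vs. a prediction shifted by one pixel
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # expert contour: 16 pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True    # prediction overlaps on a 3x3 region (9 pixels)

print(dice_score(pred, truth))  # 2*9 / (16+16) = 0.5625
```

A perfect match scores 1.0, no overlap scores 0.0; in practice scores like these would be aggregated across all slices of all test scans to rank the contest entries.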
The total prize pool was $55,000.
Mike Morris, CEO of Topcoder, told FierceHealthcare the use of a crowdsourcing platform pulls in expertise from outside the traditional pool of academic problem-solvers. “With a data science problem as complicated as this one you need minds to come together and try different approaches. You need diversity of thought, approach, and backgrounds to come up with innovative solutions,” he said.
As healthcare becomes increasingly digitized, generating large data sets, and as areas like radiation oncology demand complex computational work, the field is opening up to contributions from data scientists.
“It’s a remarkable story of taking data scientists without domain expertise in medicine, putting a medical problem in front of them, giving them tools to understand the problem, give them feedback and then, at the end of the day, producing a set of algorithms and prototypes that we can put together and refine to apply to this problem,” Mak said.
Morris sees precision medicine as one area in healthcare where crowdsourcing with a network of data scientists could have a huge role to play.
“I think that algorithms, data science, and AI are going to be key to [precision medicine]. Even today approaches in precision medicine that are technically available and working are not feasible because of the amount of time, data and processing power required to make them work. The laws of physics allow us to solve those problems, it’s just a matter of time,” he said.