More precise image analysis aims for better patient care
NIBIB-funded engineers are using deep learning to differentiate tumors from normal tissue more accurately in positron emission tomography (PET) images. Standard analysis of PET scans defines regions with abnormal radiotracer uptake as tumor. The team at Washington University in St. Louis has developed a technique that combines statistical analysis and deep learning to determine the extent of tumors at their margins.
PET images consist of voxels, the three-dimensional counterparts of pixels. Current methods classify each voxel as either tumor or normal.
“The key idea is that we don’t just learn if a voxel belongs to the tumor or not,” said team leader Abhinav Jha, Ph.D., assistant professor of biomedical engineering in the McKelvey School of Engineering. “The voxel can be part tumor and part normal. The novelty is that we can estimate how much of the voxel is the tumor.”
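The contrast between binary voxel classification and per-voxel fraction estimation can be sketched in a few lines. This is a minimal illustration, not the team's actual method: the 1-D uptake values, the threshold, and the linear rescaling used to mimic a fractional estimate are all assumptions for demonstration.

```python
import numpy as np

# Illustrative 1-D strip of PET voxel uptake values (arbitrary units);
# real PET images are 3-D arrays of such voxels.
uptake = np.array([0.1, 0.4, 0.9, 1.0, 0.6, 0.2])

# Conventional approach: label each voxel entirely tumor (1) or normal (0).
threshold = 0.5  # assumed cutoff, for illustration only
binary_mask = (uptake > threshold).astype(float)

# Fractional approach (the idea described above, crudely mimicked here by
# linear rescaling): estimate how much of each voxel is tumor, in [0, 1].
lo, hi = 0.2, 1.0  # assumed uptake levels for fully-normal / fully-tumor
tumor_fraction = np.clip((uptake - lo) / (hi - lo), 0.0, 1.0)

print(binary_mask)     # -> [0. 0. 1. 1. 1. 0.]
print(tumor_fraction)  # graded values at the tumor margin, e.g. 0.25, 0.5
```

Note how the graded estimate retains information at boundary voxels that the all-or-nothing mask discards, which is what makes the margin extent recoverable.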
The research aims to provide more accurate information about the tumor to guide treatment decisions and improve patient care.
“It’s a quality-of-life issue for patients,” said Jha. “Helping to answer those questions would be satisfying and rewarding.”
The work is reported in the journal Physics in Medicine & Biology [1]. The software is available for non-commercial purposes.
Financial support for this work was provided by the National Institute of Biomedical Imaging and Bioengineering R01 Award (R01-EB031051) and Trailblazer R21 Award (R21-EB024647), and a grant from NVIDIA. The Washington University Center for High Performance Computing provided computational resources for the project. The center is partially funded by NIH grants 1S10RR022984-01A1 and 1S10OD018091-01.
[This is an update of the original post.]