
Deep learning improves interpretation of tumors

Science Highlights
October 18, 2021

More precise image analysis aims for better patient care

Media Contacts

NIBIB Communications
nibibpress@mail.nih.gov
301-496-3500

NIBIB-funded engineers are using deep learning to more accurately differentiate tumor from normal tissue in positron emission tomography (PET) images. Standard analysis of PET scans defines any region with abnormal radiotracer uptake as tumor. The team at Washington University in St. Louis has developed a technique that combines statistical analysis and deep learning to determine the extent of tumors at their margins.

[Figure: deep learning scheme to define tumors in PET images]

In the tumor-fraction area map, each square is a voxel. The deep learning technique can determine how much of each gray-area voxel is tumor and how much is normal tissue (see scale at right, from 0, no tumor, to 1, all tumor). The technique is designed to accurately calculate the volume of tumors on PET images with the aim of improving patient care. Credit: Abhinav Jha, Washington University in St. Louis.

PET images consist of what are known as voxels, which are 3-dimensional pixels in space. Current methods count each voxel as either entirely tumor or entirely normal.
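To make that binary convention concrete, here is a minimal sketch of hard thresholding on a synthetic uptake volume. The array, the fixed-fraction-of-maximum threshold rule, and the voxel size are illustrative assumptions, not details taken from the study.

```python
import numpy as np

# Synthetic stand-in for a 3-D PET volume of radiotracer uptake values
# (illustrative only; real scans would be loaded from DICOM or NIfTI).
rng = np.random.default_rng(0)
pet = rng.gamma(shape=2.0, scale=1.0, size=(64, 64, 32))

# Conventional hard segmentation: each voxel is labeled entirely tumor (1)
# or entirely normal (0). A fixed fraction of the maximum uptake is one
# common thresholding rule (the exact rule here is an assumption).
binary_mask = (pet >= 0.4 * pet.max()).astype(np.uint8)

voxel_volume_ml = 0.2 ** 3  # assumed voxel size of 2 mm per side
print(f"binary tumor volume: {binary_mask.sum() * voxel_volume_ml:.1f} ml")
```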

“The key idea is that we don’t just learn if a voxel belongs to the tumor or not,” said team leader Abhinav Jha, Ph.D., assistant professor of biomedical engineering in the McKelvey School of Engineering. “The voxel can be part tumor and part normal. The novelty is that we can estimate how much of the voxel is the tumor.”
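As a sketch of how that fractional output changes the downstream volume calculation, the snippet below compares a whole-voxel count against a fractional sum over a synthetic tumor-fraction map. The map values and voxel size are made up for illustration; the Bayesian estimation procedure that produces the real map is described in the paper [1] and is not reproduced here.

```python
import numpy as np

# Synthetic tumor-fraction map: each voxel holds a value in [0, 1],
# the estimated fraction of that voxel occupied by tumor, rather than
# a hard 0/1 label (values here are random, for illustration only).
rng = np.random.default_rng(1)
fraction_map = np.clip(
    rng.normal(loc=0.1, scale=0.3, size=(64, 64, 32)), 0.0, 1.0
)

voxel_volume_ml = 0.2 ** 3  # assumed voxel size of 2 mm per side

# Hard count: threshold the map at 0.5, so a voxel at the tumor margin
# is counted as wholly in or wholly out.
hard_volume = (fraction_map >= 0.5).sum() * voxel_volume_ml

# Fractional sum: a margin voxel that is 30% tumor contributes 30% of
# its volume, which is the point of tissue-fraction estimation.
fractional_volume = fraction_map.sum() * voxel_volume_ml

print(f"hard-count volume: {hard_volume:.1f} ml")
print(f"fractional volume: {fractional_volume:.1f} ml")
```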

The research aims to provide more accurate information about the tumor to guide treatment decisions and improve patient care.

“It’s a quality-of-life issue for patients,” said Jha. “Helping to answer those questions would be satisfying and rewarding.”

The work is reported in the journal Physics in Medicine & Biology [1]. The software is available for non-commercial use.

Financial support for this work was provided by the National Institute of Biomedical Imaging and Bioengineering R01 Award (R01-EB031051) and Trailblazer R21 Award (R21-EB024647), and a grant from NVIDIA. The Washington University Center for High Performance Computing provided computational resources for the project. The center is partially funded by NIH grants 1S10RR022984-01A1 and 1S10OD018091-01.

1. Liu Z, Mhlanga JC, Laforest R, Derenoncourt P-R, Siegel BA, Jha AK. A Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Phys Med Biol. 2021 Jun 14;66(12). doi: 10.1088/1361-6560/ac01f4

[This is an update of the original post.]