Phys Med. 2020 Sep 17;78:93-100. doi: 10.1016/j.ejmp.2020.09.004. Online ahead of print.
PURPOSE: Deep learning has shown great efficacy for semantic segmentation. However, there are difficulties in the collection, labeling and management of medical imaging data, because of ethical complications and the limited number of imaging studies available at a single facility. This study aimed to find a simple and low-cost method to increase the accuracy of deep learning semantic segmentation for radiation therapy of prostate cancer.
METHODS: In total, 556 cases with non-contrast CT images for prostate cancer radiation therapy were examined using a two-dimensional U-Net. Initially, all slices were used as input data. Then, we removed the cranial slices that lay beyond the margins of the bladder and rectum. Finally, the ground truth labels for the bladder and rectum were added as channels to the input of the prostate training dataset.
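The final step above, adding organ labels as extra input channels, can be sketched as follows. This is a hypothetical illustration of the general technique, not the authors' actual code; the array shapes and function name are assumptions:

```python
import numpy as np

def make_prostate_input(ct_slice, bladder_mask, rectum_mask):
    """Stack a 2-D CT slice with the binary bladder and rectum
    ground-truth masks into a single (H, W, 3) multi-channel input
    for the prostate segmentation network."""
    return np.stack([ct_slice, bladder_mask, rectum_mask], axis=-1)

# Example with a hypothetical 512 x 512 slice (sizes assumed):
ct = np.random.rand(512, 512).astype(np.float32)       # normalized CT slice
bladder = np.zeros((512, 512), dtype=np.float32)        # binary label map
rectum = np.zeros((512, 512), dtype=np.float32)         # binary label map
x = make_prostate_input(ct, bladder, rectum)
print(x.shape)  # (512, 512, 3)
```

A network trained this way expects the organ masks at inference time as well, so in practice the bladder and rectum predictions (or labels) must be available before the prostate model is applied.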
RESULTS: The highest mean dice similarity coefficients (DSCs) for each organ in the test dataset of 56 cases were 0.85 ± 0.05, 0.94 ± 0.04 and 0.85 ± 0.07 for the prostate, bladder and rectum, respectively. Removal of the cranial slices from the original images significantly increased the DSC of the rectum from 0.83 ± 0.09 to 0.85 ± 0.07 (p < 0.05). Adding bladder and rectum information to prostate training without removing the slices significantly increased the DSC of the prostate from 0.79 ± 0.05 to 0.85 ± 0.05 (p < 0.05).
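The DSC used above is a standard overlap metric, DSC = 2|A∩B| / (|A| + |B|). A minimal implementation for binary masks (a generic formulation, not the authors' evaluation code) looks like this:

```python
import numpy as np

def dice_similarity_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|). eps avoids division by zero
    when both masks are empty."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy example: a 4-pixel square vs. a 6-pixel rectangle overlapping in 4 pixels
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(round(dice_similarity_coefficient(a, b), 3))  # 0.8
```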
CONCLUSIONS: These cost-free approaches may be useful for new applications, including updated models and datasets, and may be applicable to other organs at risk (OARs) and clinical targets such as elective nodal irradiation.