Automatic hepatic tumor segmentation in intra-operative ultrasound: a supervised deep-learning approach.



Training and evaluation of a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative ultrasound (iUS) images, with the aim of improving the accuracy of tumor margin assessment during liver surgery and the detection of lesions during colorectal surgery.


The presented model achieved a DSC of 0.84 (p = 0.0037), comparable to the expert human raters' scores. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyperechoic and isoechoic ones (DSC of 0.70 and 0.60, respectively), missing only lesions that were isoechoic or >20 mm in diameter (8% of the tumors). Including extra margins of probable tumor tissue around the lesions in the training ground truth resulted in a lower DSC of 0.75 (p = 0.0022).
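For reference, the Dice similarity coefficient (DSC) reported above measures the overlap between a predicted mask and the ground truth. A minimal sketch in Python with NumPy (function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for identical non-empty masks, 0.0 for disjoint ones.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: conventionally define DSC as 1.0
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy example: two 4x4 masks of 4 pixels each, overlapping in 2 pixels
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
print(dice_coefficient(a, b))  # → 0.5
```

A DSC of 0.84, as achieved by the model, thus indicates that roughly 84% overlap (by this harmonic measure) was reached between predicted and expert segmentations.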


The model can accurately segment hepatic tumors from iUS images and has the potential to speed up resection margin definition during surgery and lesion detection during screening by automating iUS assessment.


In this retrospective study, a U-Net was trained with the nnU-Net framework in different configurations for the segmentation of colorectal liver metastases (CRLM) from iUS. The model was trained on B-mode intraoperative hepatic US images hand-labeled by an expert clinician and tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. Ground truth for the test set was provided by a radiologist, and three additional delineation sets were used to compute inter-observer variability.

More about this publication

Journal of medical imaging (Bellingham, Wash.)
  • Volume 11
  • Issue 2
  • Pages 024501
  • Publication date 01-03-2024
