
Deep learning-based segmentation of multisite disease in ovarian cancer / Buddenkotte, Thomas; Rundo, Leonardo; Woitek, Ramona; Escudero Sanchez, Lorena; Beer, Lucian; Crispin-Ortuzar, Mireia; Etmann, Christian; Mukherjee, Subhadip; Bura, Vlad; Mccague, Cathal; Sahin, Hilal; Pintican, Roxana; Zerunian, Marta; Allajbeu, Iris; Singh, Naveena; Sahdev, Anju; Havrilesky, Laura; Cohn, David E; Bateman, Nicholas W; Conrads, Thomas P; Darcy, Kathleen M; Maxwell, G Larry; Freymann, John B; Öktem, Ozan; Brenton, James D; Sala, Evis; Schönlieb, Carola-Bibiane. - In: EUROPEAN RADIOLOGY EXPERIMENTAL. - ISSN 2509-9280. - 7:1(2023), p. 77. [10.1186/s41747-023-00388-z]

Deep learning-based segmentation of multisite disease in ovarian cancer

Zerunian, Marta (Data Curation)
2023

Abstract

Purpose: To determine whether pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. Methods: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four institutions were used for training (n = 276), evaluation (n = 104), and testing (n = 71) of the methods. Performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. Results: Our model significantly outperformed no-new-Net for pelvic/ovarian lesions in cross-validation, on the evaluation set, and on the test set (p = 4 × 10⁻⁷, 3 × 10⁻⁴, and 4 × 10⁻², respectively), and for omental lesions on the evaluation set (p = 1 × 10⁻³). The model did not perform significantly differently from a trainee radiologist in segmenting pelvic/ovarian lesions (p = 0.371). On an independent test set, it achieved a DSC of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian lesions and 61 ± 24 for omental lesions. Conclusion: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and, for pelvic/ovarian lesions, achieves performance close to that of a trainee radiologist. Relevance statement: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and by researchers for building complex analysis pipelines. Key points: • The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images is presented. • Automated segmentation of ovarian cancer lesions can be comparable with manual segmentation by trainee radiologists. • Careful hyperparameter tuning can yield models that significantly outperform strong state-of-the-art baselines.
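The abstract reports per-lesion-site Dice similarity coefficients and Wilcoxon-test comparisons between segmentation methods. The snippet below is a minimal sketch, not the authors' pipeline, of how a per-case DSC and a paired Wilcoxon signed-rank test could be computed with NumPy and SciPy; the function name dice_coefficient, the synthetic scores, and the 0-100 DSC scaling are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): per-case Dice similarity
# coefficient (DSC) between binary lesion masks, and a paired Wilcoxon signed-rank
# test comparing two segmentation methods on the same cases.
import numpy as np
from scipy.stats import wilcoxon


def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|), reported here on a 0-100 scale (assumption)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * intersection / (pred.sum() + gt.sum() + eps)


# Hypothetical per-case DSC scores of two methods evaluated on the same test cases.
rng = np.random.default_rng(0)
dsc_model_a = rng.normal(loc=71, scale=20, size=71).clip(0, 100)  # e.g., proposed model
dsc_model_b = rng.normal(loc=65, scale=22, size=71).clip(0, 100)  # e.g., baseline

# Paired, two-sided Wilcoxon signed-rank test on the per-case DSC differences.
statistic, p_value = wilcoxon(dsc_model_a, dsc_model_b)
print(f"model A DSC (mean ± SD): {dsc_model_a.mean():.0f} ± {dsc_model_a.std():.0f}")
print(f"Wilcoxon statistic = {statistic:.1f}, p = {p_value:.3g}")
```

In practice, the masks would come from the model predictions and the reference annotations for each scan; the paired test is applied to the case-wise DSC values of the two methods being compared.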
Deep learning; Omentum; Ovarian neoplasms; Pelvis; Tomography (X-ray computed)
01 Journal publication::01a Journal article
Files attached to this product
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1695388
