A convolutional neural network for total tumor segmentation in [64Cu]Cu-DOTATATE PET/CT of patients with neuroendocrine neoplasms

  • Esben Andreas Carlsen (Creator)
  • Kristian Lindholm (Creator)
  • Amalie Hindsholm (Creator)
  • Mathias Gæde (Creator)
  • Claes Nøhr Ladefoged (Creator)
  • Mathias Loft (Creator)
  • Camilla Bardram Johnbeck (Creator)
  • Seppo Wang Langer (Creator)
  • Peter Oturai (Creator)
  • Ulrich Knigge (Creator)
  • Andreas Kjær (Creator)
  • Flemming Littrup Andersen (Creator)

Dataset

Description

Abstract

Background: Segmentation of neuroendocrine neoplasms (NENs) in [64Cu]Cu-DOTATATE positron emission tomography makes it possible to extract quantitative measures usable for prognostication of patients. However, manual tumor segmentation is cumbersome and time-consuming. We therefore aimed to implement and test an artificial intelligence (AI) network for tumor segmentation. Patients with gastroenteropancreatic or lung NEN who had undergone [64Cu]Cu-DOTATATE PET/CT were included in our training (n = 117) and test (n = 41) cohorts. Further, 10 patients with no signs of NEN were included as negative controls. Ground truth segmentations were obtained by a physician using a standardized semiautomatic method for tumor segmentation. The nnU-Net framework was used to set up a deep learning U-Net architecture. Dice score, sensitivity, and precision were used for selection of the final model. AI segmentations were implemented in a clinical imaging viewer, where a physician evaluated performance and performed manual adjustments.

Results: Cross-validation training was used to generate models and an ensemble model. The ensemble model performed best overall, with a lesion-wise Dice of 0.850 and pixel-wise Dice, precision, and sensitivity of 0.801, 0.786, and 0.872, respectively. Performance of the ensemble model was acceptable with some degree of manual adjustment in 35/41 (85%) patients. Final tumor segmentation could be obtained from the AI model with manual adjustments in 5 min versus 17 min for the ground truth method, p
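The abstract reports pixel-wise Dice, precision, and sensitivity for model selection. As an illustration only, and not the authors' evaluation code, the following minimal Python sketch shows how these pixel-wise metrics can be computed from binary ground-truth and predicted masks; the function name, array handling, and the small smoothing constant `eps` are assumptions.

```python
import numpy as np


def pixelwise_metrics(gt: np.ndarray, pred: np.ndarray, eps: float = 1e-8):
    """Compute pixel-wise Dice, precision, and sensitivity for binary masks.

    gt, pred: boolean or {0, 1} arrays of the same shape (e.g. a PET volume mask).
    eps: small constant to avoid division by zero (an assumption, not from the source).
    """
    gt = gt.astype(bool)
    pred = pred.astype(bool)

    tp = np.logical_and(gt, pred).sum()    # true positive voxels
    fp = np.logical_and(~gt, pred).sum()   # false positive voxels
    fn = np.logical_and(gt, ~pred).sum()   # false negative voxels

    dice = (2 * tp) / (2 * tp + fp + fn + eps)   # overlap between the two masks
    precision = tp / (tp + fp + eps)             # fraction of predicted voxels that are tumor
    sensitivity = tp / (tp + fn + eps)           # fraction of tumor voxels that are detected
    return dice, precision, sensitivity


if __name__ == "__main__":
    # Small synthetic example (illustration only)
    gt = np.zeros((4, 4), dtype=bool)
    pred = np.zeros((4, 4), dtype=bool)
    gt[1:3, 1:3] = True
    pred[1:3, 1:4] = True
    print(pixelwise_metrics(gt, pred))
```

The lesion-wise Dice mentioned in the abstract would instead be computed per connected tumor component rather than over all voxels at once; the sketch above covers only the pixel-wise case.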
Date made available: 2022
Publisher: figshare