
ChronoMID—Cross-modal neural networks for 3-D temporal medical imaging data

Lio P.
2020

Abstract

ChronoMID (neural networks for temporally varying, hence Chrono, Medical Imaging Data) introduces cross-modal convolutional neural networks (X-CNNs) to the medical domain. In this paper, we present multiple approaches for incorporating temporal information into X-CNNs and compare their performance in a case study on the classification of abnormal bone remodelling in mice. Previous work on medical models has predominantly focused on either spatial or temporal aspects, but rarely both. Our models seek to unify these complementary sources of information and derive insights in a bottom-up, data-driven manner. As with many medical datasets, the case study herein exhibits deep rather than wide data; we apply various techniques, including extensive regularisation, to account for this. After training on a balanced set of approximately 70,000 images, two of the models, those using difference maps from known reference points, outperformed a state-of-the-art convolutional neural network baseline by over 30 percentage points (>99% vs. 68.26%) on an unseen, balanced validation set of around 20,000 images. These models are expected to perform well on sparse datasets, based both on previous findings with X-CNNs and on the representations of time used, which permit arbitrarily large and irregular gaps between data points. Our results highlight the importance of identifying a suitable description of time for a problem domain: unsuitable descriptors may not only fail to improve a model, they may in fact confound it.
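The key temporal representation mentioned in the abstract, a difference map against a known reference scan, can be illustrated with a minimal sketch. The code below is not taken from the paper; the function name, array shapes, and the NumPy-only setup are assumptions for illustration, and the two input streams are merely stacked rather than fed into the separate, cross-connected convolutional streams of a full X-CNN.

import numpy as np

def difference_map(followup, reference):
    # Voxel-wise signed change of a follow-up scan relative to its reference.
    return followup.astype(np.float32) - reference.astype(np.float32)

# Hypothetical data standing in for 2-D slices of micro-CT mouse-bone volumes.
rng = np.random.default_rng(0)
reference_scan = rng.random((64, 64), dtype=np.float32)   # baseline reference scan
followup_scan = reference_scan + 0.05 * rng.standard_normal((64, 64)).astype(np.float32)

diff = difference_map(followup_scan, reference_scan)

# Pair the raw image with its difference map as two input streams; in an
# X-CNN these would feed separate convolutional towers linked by cross-modal
# connections, here they are only stacked to show the pairing.
two_stream_input = np.stack([followup_scan, diff], axis=0)  # shape (2, 64, 64)
print(two_stream_input.shape)

Because the difference map encodes change relative to a fixed reference rather than elapsed time, arbitrarily large or irregular gaps between imaging sessions need no special handling in the input format.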
2020
Bone; Osteoplasty; Finite Element Method
01 Journal publication::01a Journal article
ChronoMID—Cross-modal neural networks for 3-D temporal medical imaging data / Rakowski, A. G.; Velickovic, P.; Dall'Ara, E.; Lio, P.. - In: PLOS ONE. - ISSN 1932-6203. - 15:2(2020). [10.1371/journal.pone.0228962]
Files attached to this record
File: Rakowski_ChronoMID—Cross-modal_2020.pdf
Access: open access
Note: DOI 10.1371/journal.pone.0228962
Type: Publisher's version (published version with the publisher's layout)
Licence: Creative Commons
Size: 1.42 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1719701
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science (ISI): 0