Transfer learning improves supervised image segmentation across imaging protocols

Annegreet van Opbroek, M. Arfan Ikram, Meike W. Vernooij, Marleen de Bruijne

106 Citations (Scopus)
163 Downloads (Pure)

Abstract

The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques, which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and may therefore improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter/MS-lesion segmentation. The experiments showed that when only a small amount of representative training data is available, transfer learning can greatly outperform common supervised-learning approaches, reducing classification errors by up to 60%.
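To illustrate the general idea (not the authors' exact method), here is a minimal sketch in Python, assuming a sample-weighted SVM in which the few representative target-protocol voxels receive larger training weights than the abundant source-protocol voxels with slightly different characteristics; the feature matrices, labels, and weight values are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical voxel features and tissue labels (assumed data):
# source = many labeled samples from other scanners/protocols,
# target = a few representative samples from the target protocol.
rng = np.random.default_rng(0)
X_source, y_source = rng.normal(0.0, 1.0, (1000, 5)), rng.integers(0, 2, 1000)
X_target, y_target = rng.normal(0.3, 1.0, (50, 5)), rng.integers(0, 2, 50)

X_train = np.vstack([X_source, X_target])
y_train = np.concatenate([y_source, y_target])

# Up-weight the scarce representative samples so the decision boundary
# adapts toward the target distribution (assumed weighting scheme; the
# paper compares four different transfer classifiers).
weights = np.concatenate([np.full(len(y_source), 1.0),
                          np.full(len(y_target), 20.0)])

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train, sample_weight=weights)
```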

Original language: English
Journal: IEEE Transactions on Medical Imaging
Volume: 34
Issue number: 5
Pages (from-to): 1018-1030
Number of pages: 13
ISSN: 0278-0062
DOI
Status: Published - 1 May 2015
