Abstract
Segmentation of the developing fetal brain is an important step in quantitative analyses. However, manual segmentation is time-consuming, prone to error, and must be performed by highly specialized individuals. Super-resolution reconstruction of fetal MRI has become standard for processing such data, as it improves image quality and resolution. However, different reconstruction pipelines produce slightly different outputs, further complicating the generalization of segmentation methods that target super-resolution data. We therefore propose transfer learning with noisy multi-class labels to automatically segment high-resolution fetal brain MRIs, trained on a single set of segmentations created with one reconstruction method and evaluated for generalizability across other reconstruction methods. Our results show that the network can automatically segment fetal brain reconstructions into 7 tissue types, regardless of the reconstruction method used. Transfer learning offers some advantages over training without pre-initialized weights, although the network trained on clean labels produced more accurate segmentations overall. No additional manual segmentations were required. The proposed network therefore has the potential to eliminate the manual segmentation required for quantitative analyses of the fetal brain, independent of the reconstruction method used, offering an unbiased way to quantify normal and pathological neurodevelopment.