Fetal brain tissue annotation and segmentation challenge results

Kelly Payette*, Hongwei Bran Li, Priscille de Dumast, Roxane Licandro, Hui Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao Liu, Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming Dong, Hao Fu, Guotai Wang, Zun Hyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L. Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida, Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum, Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, Kuanlun Liao, Yixuan Wu, Jintai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both research and clinical contexts. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. We therefore organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms at an international level. The challenge used the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in the challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks, and the main differences between submissions lay in the fine-tuning performed during training and in the specific pre- and post-processing steps. The challenge results showed that almost all submissions performed similarly, and four of the top five teams used ensemble learning methods. However, one team's algorithm, based on an asymmetric U-Net architecture, significantly outperformed the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
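For readers unfamiliar with the segmentation setups described in the abstract, the following is a minimal, hypothetical sketch of the kind of 3D U-Net training step most teams built on, written with the MONAI framework as one example of the "existing medical imaging deep learning frameworks" mentioned above. The class count (seven tissues plus background), channel sizes, patch size, and optimizer settings are illustrative assumptions, not any team's actual configuration.

```python
# Illustrative sketch only: a generic 3D U-Net segmentation step, not any
# FeTA team's actual method or hyperparameters.
import torch
from monai.networks.nets import UNet
from monai.losses import DiceCELoss

# Assumption: 7 fetal tissue labels + background = 8 output classes.
model = UNet(
    spatial_dims=3,                     # volumetric super-resolution reconstructions
    in_channels=1,                      # single T2-weighted SR volume
    out_channels=8,
    channels=(16, 32, 64, 128, 256),    # illustrative encoder/decoder widths
    strides=(2, 2, 2, 2),
    num_res_units=2,
)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)   # combined Dice + cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a random 3D patch (shapes are placeholders).
image = torch.rand(2, 1, 96, 96, 96)                   # batch of image patches
label = torch.randint(0, 8, (2, 1, 96, 96, 96))        # integer tissue labels
optimizer.zero_grad()
logits = model(image)                                  # (2, 8, 96, 96, 96)
loss = loss_fn(logits, label)
loss.backward()
optimizer.step()
```

A full pipeline would add the dataset-specific pre-processing (e.g., intensity normalization and cropping), sliding-window inference, and, as several top teams did, an ensemble of such models whose softmax outputs are averaged before taking the argmax.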

Original language: American English
Article number: 102833
Pages (from-to): 1-23
Number of pages: 23
Journal: Medical Image Analysis
Volume: 88
DOIs
State: Published - Aug 2023

Bibliographical note

Funding Information:
The authors would like to acknowledge funding from the following funding sources: the OPO Foundation, the University Research Priority Project Adaptive Brain Circuits in Development and Learning (AdaBD) of the University of Zürich, the Prof. Dr. Max Cloetta Foundation, the Anna Müller Grocholski Foundation, the Foundation for Research in Science and the Humanities at the UZH, the EMDO Foundation, the Hasler Foundation, the FZK Grant, the Swiss National Science Foundation (project 205321-182602), the Forschungskredit (Grant NO. FK-21-125) from University of Zurich, the ZNZ PhD Grant, the EU H2020 Marie Sklodowska-Curie [765148], Austrian Science Fund FWF [P 35189], Vienna Science and Technology Fund WWTF [LS20-065], and the Austrian Research Fund Grant I3925-B27 in collaboration with the French National Research Agency (ANR). We acknowledge access to the expertise of the CIBM Center for Biomedical Imaging, a Swiss research center of excellence founded and supported by Lausanne University Hospital (CHUV), University of Lausanne (UNIL), Ecole polytechnique fédérale de Lausanne (EPFL), University of Geneva (UNIGE) and Geneva University Hospitals (HUG). We would also like to acknowledge funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement TRABIT No 765148, as well as from core and project funding from the Wellcome [203148/Z/16/Z; 203145Z/16/Z; WT101957], and EPSRC [NS/A000049/1; NS/A000050/1; NS/A000027/1]. TV is supported by a Medtronic / RAEng Research Chair [RCSRF1819\7\34]. HBL is supported by an Nvidia Academic GPU grant and Forschungskredit (grant No. K-74851-01-01) from the University of Zurich. The authors would also like to thank NVIDIA for providing access to computing resources.

Publisher Copyright:
© 2023

Keywords

  • Congenital disorders
  • Fetal brain MRI
  • Multi-class image segmentation
  • Super-resolution reconstructions
