Segmenting hippocampal subfields from 3T MRI with multi-modality images

Zhengwang Wu, Yaozong Gao, Feng Shi, Guangkai Ma, Valerie Jewells, Dinggang Shen

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

Hippocampal subfields play important roles in many brain activities. However, due to their small structural size, the low signal contrast, and the limited image resolution of 3T MRI, automatic hippocampal subfield segmentation remains underexplored. In this paper, we propose an automatic learning-based hippocampal subfield segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting-state fMRI (rs-fMRI). Appearance features and relationship features are extracted to capture the appearance patterns in the structural MR images and the connectivity patterns in rs-fMRI, respectively. In the training stage, these features are used to train a structured random forest classifier, which is then iteratively refined within an auto-context model using context features and updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation of each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To the best of our knowledge, this is the first work to address the challenging automatic segmentation of hippocampal subfields using relationship features from rs-fMRI, which are designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets, and the segmentation results are quantitatively compared with manual labels using a leave-one-out strategy, demonstrating the effectiveness of our method. From the experiments, we find that (a) multi-modality features significantly improve subfield segmentation performance compared with features from a single modality, and (b) automatic segmentation results using 3T multi-modality MR images can be partially comparable to those using 7T T1 MRI.
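
The pipeline summarized above (per-voxel multi-modality features, a random forest classifier, and iterative auto-context refinement) can be illustrated with a minimal sketch. The code below is not the authors' implementation: it substitutes scikit-learn's plain RandomForestClassifier for their structured random forest, stubs out the T1/T2 appearance and rs-fMRI connectivity features with synthetic per-voxel feature vectors, and reduces the auto-context step to appending the previous stage's class-probability maps as extra context features; N_ITERS and N_CLASSES are assumed values, not taken from the paper.

# Illustrative sketch only -- not the authors' code. A plain scikit-learn
# RandomForestClassifier stands in for the structured random forest, and the
# auto-context step is simplified to feeding each stage's class probabilities
# forward as additional context features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_ITERS = 3      # number of auto-context refinement rounds (assumed)
N_CLASSES = 5    # subfield labels plus background (assumed count)

def train_autocontext(X, y, n_iters=N_ITERS):
    """Train a cascade of classifiers; each stage also sees the previous
    stage's per-class probabilities as context features."""
    cascade = []
    probs = np.zeros((X.shape[0], N_CLASSES))   # no context at the first stage
    for _ in range(n_iters):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(np.hstack([X, probs]), y)
        cascade.append(clf)
        probs = clf.predict_proba(np.hstack([X, probs]))
    return cascade

def predict_autocontext(cascade, X):
    """Apply the trained cascade, propagating the probability maps forward."""
    probs = np.zeros((X.shape[0], N_CLASSES))
    for clf in cascade:
        probs = clf.predict_proba(np.hstack([X, probs]))
    return probs.argmax(axis=1)

# Toy usage with synthetic per-voxel features; in the paper these would be
# appearance features from T1/T2 and relationship (connectivity) features
# from rs-fMRI.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))
y_train = rng.integers(0, N_CLASSES, size=1000)
cascade = train_autocontext(X_train, y_train)
predicted_labels = predict_autocontext(cascade, rng.normal(size=(200, 20)))

The design choice mirrored here is that each later stage sees both the original multi-modality features and the evolving probability maps, which is what lets the auto-context iterations progressively refine the initial prediction.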

Original language: English
Pages (from-to): 10-22
Number of pages: 13
Journal: Medical Image Analysis
Volume: 43
DOI: 10.1016/j.media.2017.09.006
Publication status: Published - 2018 Jan 1

Keywords

  • Auto-context model
  • Hippocampal subfields segmentation
  • Multi-modality features
  • Structured random forest

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Computer Vision and Pattern Recognition
  • Health Informatics
  • Computer Graphics and Computer-Aided Design

Cite this

Segmenting hippocampal subfields from 3T MRI with multi-modality images. / Wu, Zhengwang; Gao, Yaozong; Shi, Feng; Ma, Guangkai; Jewells, Valerie; Shen, Dinggang.

In: Medical Image Analysis, Vol. 43, 01.01.2018, p. 10-22.

Research output: Contribution to journal › Article

@article{dea0bc9c03ed496ca97b383b09b3e738,
title = "Segmenting hippocampal subfields from 3T MRI with multi-modality images",
abstract = "Hippocampal subfields play important roles in many brain activities. However, due to the small structural size, low signal contrast, and insufficient image resolution of 3T MR, automatic hippocampal subfields segmentation is less explored. In this paper, we propose an automatic learning-based hippocampal subfields segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting state fMRI (rs-fMRI). The appearance features and relationship features are both extracted to capture the appearance patterns in structural MR images and also the connectivity patterns in rs-fMRI, respectively. In the training stage, these extracted features are adopted to train a structured random forest classifier, which is further iteratively refined in an auto-context model by adopting the context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation for each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To our best knowledge, this is the first work that addresses the challenging automatic hippocampal subfields segmentation using relationship features from rs-fMRI, which is designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets and the segmentation results are quantitatively compared with manual labels using the leave-one-out strategy, which shows the effectiveness of our method. From experiments, we find a) multi-modality features can significantly increase subfields segmentation performance compared to those only using one modality; b) automatic segmentation results using 3T multi-modality MR images could be partially comparable to those using 7T T1 MRI.",
keywords = "Auto-context model, Hippocampal subfields segmentation, Multi-modality features, Structured random forest",
author = "Zhengwang Wu and Yaozong Gao and Feng Shi and Guangkai Ma and Valerie Jewells and Dinggang Shen",
year = "2018",
month = "1",
day = "1",
doi = "10.1016/j.media.2017.09.006",
language = "English",
volume = "43",
pages = "10--22",
journal = "Medical Image Analysis",
issn = "1361-8415",
publisher = "Elsevier",

}

TY - JOUR

T1 - Segmenting hippocampal subfields from 3T MRI with multi-modality images

AU - Wu, Zhengwang

AU - Gao, Yaozong

AU - Shi, Feng

AU - Ma, Guangkai

AU - Jewells, Valerie

AU - Shen, Dinggang

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Hippocampal subfields play important roles in many brain activities. However, due to the small structural size, low signal contrast, and insufficient image resolution of 3T MR, automatic hippocampal subfields segmentation is less explored. In this paper, we propose an automatic learning-based hippocampal subfields segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting state fMRI (rs-fMRI). The appearance features and relationship features are both extracted to capture the appearance patterns in structural MR images and also the connectivity patterns in rs-fMRI, respectively. In the training stage, these extracted features are adopted to train a structured random forest classifier, which is further iteratively refined in an auto-context model by adopting the context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation for each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To our best knowledge, this is the first work that addresses the challenging automatic hippocampal subfields segmentation using relationship features from rs-fMRI, which is designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets and the segmentation results are quantitatively compared with manual labels using the leave-one-out strategy, which shows the effectiveness of our method. From experiments, we find a) multi-modality features can significantly increase subfields segmentation performance compared to those only using one modality; b) automatic segmentation results using 3T multi-modality MR images could be partially comparable to those using 7T T1 MRI.

KW - Auto-context model

KW - Hippocampal subfields segmentation

KW - Multi-modality features

KW - Structured random forest

UR - http://www.scopus.com/inward/record.url?scp=85029830880&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85029830880&partnerID=8YFLogxK

U2 - 10.1016/j.media.2017.09.006

DO - 10.1016/j.media.2017.09.006

M3 - Article

C2 - 28961451

AN - SCOPUS:85029830880

VL - 43

SP - 10

EP - 22

JO - Medical Image Analysis

JF - Medical Image Analysis

SN - 1361-8415

ER -