View-aligned hypergraph learning for Alzheimer's disease diagnosis with incomplete multi-modality data

Mingxia Liu, Jun Zhang, Pew-Thian Yap, Dinggang Shen

Research output: Contribution to journal › Article

46 Citations (Scopus)

Abstract

Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) and its prodrome (i.e., mild cognitive impairment, MCI) remains an active area of research. Several multi-view learning methods have been recently developed for AD/MCI diagnosis by using incomplete multi-modality data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to sub-optimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views. We further assemble the class probability scores generated from VAHC, via a multi-view label fusion method for making a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Experimental results demonstrate that our method outperforms state-of-the-art methods that use incomplete multi-modality data for AD/MCI diagnosis.
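The core pipeline in the abstract (build a hypergraph per view, then classify transductively with a hypergraph regularizer) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the hyperedges here come from simple k-nearest-neighbor grouping rather than sparse representation, a single view is shown rather than the view-aligned multi-view model, and all function names are hypothetical.

```python
import numpy as np

def knn_hyperedges(X, k=2):
    """One hyperedge per sample: the sample plus its k nearest neighbors.
    (A simplified stand-in for the paper's sparse-representation hyperedges.)"""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    H = np.zeros((n, n))                       # incidence: vertices x hyperedges
    for i in range(n):
        members = np.argsort(dist[i])[:k + 1]  # includes sample i itself
        H[members, i] = 1.0
    return H

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    dv = H @ w                                 # vertex degrees
    de = H.sum(axis=0)                         # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_is
    return np.eye(n) - Theta

def transductive_classify(L, y, mu=1.0):
    """Solve min_f f^T L f + mu ||f - y||^2, i.e. f = (L/mu + I)^{-1} y;
    unlabeled samples have y = 0 and receive propagated scores."""
    return np.linalg.solve(L / mu + np.eye(L.shape[0]), y)

# Toy usage: two well-separated clusters, one labeled sample per cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
y = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0])
f = transductive_classify(hypergraph_laplacian(knn_hyperedges(X)), y)
# Scores f carry the labeled samples' signs to their cluster-mates.
```

In the paper's multi-view setting, one such Laplacian would be built per modality-availability view, with a view-aligned regularizer coupling the per-view score vectors before label fusion.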

Original language: English
Pages (from-to): 123-134
Number of pages: 12
Journal: Medical Image Analysis
Volume: 36
DOI: 10.1016/j.media.2016.11.002
Publication status: Published - 2017 Feb 1

Keywords

  • Alzheimer's disease
  • Classification
  • Incomplete data
  • Multi-modality

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Radiology Nuclear Medicine and imaging
  • Computer Vision and Pattern Recognition
  • Health Informatics
  • Computer Graphics and Computer-Aided Design

Cite this

View-aligned hypergraph learning for Alzheimer's disease diagnosis with incomplete multi-modality data. / Liu, Mingxia; Zhang, Jun; Yap, Pew-Thian; Shen, Dinggang.

In: Medical Image Analysis, Vol. 36, 01.02.2017, p. 123-134.

Research output: Contribution to journal › Article

@article{470d80de684d4f69b601e3a673fb973d,
title = "View-aligned hypergraph learning for Alzheimer's disease diagnosis with incomplete multi-modality data",
abstract = "Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) and its prodrome (i.e., mild cognitive impairment, MCI) remains an active area of research. Several multi-view learning methods have been recently developed for AD/MCI diagnosis by using incomplete multi-modality data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to sub-optimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views. We further assemble the class probability scores generated from VAHC, via a multi-view label fusion method for making a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Experimental results demonstrate that our method outperforms state-of-the-art methods that use incomplete multi-modality data for AD/MCI diagnosis.",
keywords = "Alzheimer's disease, Classification, Incomplete data, Multi-modality",
author = "Mingxia Liu and Jun Zhang and Yap, {Pew-Thian} and Dinggang Shen",
year = "2017",
month = "2",
day = "1",
doi = "10.1016/j.media.2016.11.002",
language = "English",
volume = "36",
pages = "123--134",
journal = "Medical Image Analysis",
issn = "1361-8415",
publisher = "Elsevier",

}

TY - JOUR

T1 - View-aligned hypergraph learning for Alzheimer's disease diagnosis with incomplete multi-modality data

AU - Liu, Mingxia

AU - Zhang, Jun

AU - Yap, Pew-Thian

AU - Shen, Dinggang

PY - 2017/2/1

Y1 - 2017/2/1

N2 - Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) and its prodrome (i.e., mild cognitive impairment, MCI) remains an active area of research. Several multi-view learning methods have been recently developed for AD/MCI diagnosis by using incomplete multi-modality data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to sub-optimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views. We further assemble the class probability scores generated from VAHC, via a multi-view label fusion method for making a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Experimental results demonstrate that our method outperforms state-of-the-art methods that use incomplete multi-modality data for AD/MCI diagnosis.

AB - Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) and its prodrome (i.e., mild cognitive impairment, MCI) remains an active area of research. Several multi-view learning methods have been recently developed for AD/MCI diagnosis by using incomplete multi-modality data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to sub-optimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views. We further assemble the class probability scores generated from VAHC, via a multi-view label fusion method for making a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Experimental results demonstrate that our method outperforms state-of-the-art methods that use incomplete multi-modality data for AD/MCI diagnosis.

KW - Alzheimer's disease

KW - Classification

KW - Incomplete data

KW - Multi-modality

UR - http://www.scopus.com/inward/record.url?scp=84997638279&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84997638279&partnerID=8YFLogxK

U2 - 10.1016/j.media.2016.11.002

DO - 10.1016/j.media.2016.11.002

M3 - Article

C2 - 27898305

AN - SCOPUS:84997638279

VL - 36

SP - 123

EP - 134

JO - Medical Image Analysis

JF - Medical Image Analysis

SN - 1361-8415

ER -