Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data are insufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., those extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from the training to the testing data in the label domain. However, such a graph is learned exclusively in the feature domain and may not be optimal in the label domain, which can eventually undermine classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the intrinsic data representation learned in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee optimal classification of new testing data. Furthermore, we extend pGTL to incorporate multi-modal imaging data, which improves classification accuracy and robustness, as multi-modal imaging data provide complementary information. Promising classification results are achieved in identifying Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects using MRI and PET data.
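The three-step loop described above can be sketched in a minimal form. This is an illustrative sketch only, not the authors' exact formulation: the Gaussian-kernel affinity, the linear fusion of feature-domain and label-domain similarity (weight `beta`), the bandwidth `sigma`, and the clamping-based verification step are all assumptions made for demonstration.

```python
import numpy as np

def pgtl_sketch(X, y_train, n_train, n_iter=10, beta=0.5, sigma=1.0):
    """Illustrative progressive graph-based transductive loop.

    X       : (n, d) feature matrix, training rows first.
    y_train : (n_train, c) one-hot labels for the training rows.
    NOTE: beta, sigma, and the fusion rule below are hypothetical
    choices for this sketch, not the paper's actual model.
    """
    n, n_class = X.shape[0], y_train.shape[1]
    # Label matrix: known labels for training rows, zeros for testing rows.
    F = np.zeros((n, n_class))
    F[:n_train] = y_train
    # Fixed feature-domain affinity (Gaussian kernel on squared distances).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_feat = np.exp(-d2 / (2.0 * sigma ** 2))
    for _ in range(n_iter):
        # (1) Refine subject-wise relationships using label-domain similarity.
        W = (1.0 - beta) * W_feat + beta * (F @ F.T)
        np.fill_diagonal(W, 0.0)
        W = np.clip(W, 0.0, None)
        # Row-normalize to obtain propagation weights.
        P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
        # (2) Update the intrinsic representation by propagating labels.
        F = P @ F
        # (3) Verify against the training data by clamping known labels.
        F[:n_train] = y_train
    return F.argmax(axis=1)
```

In this sketch the graph is re-estimated at every iteration from both domains, which is the key difference from conventional GTL, where the graph built once in the feature domain stays fixed throughout label propagation.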