Learning complementary information from multi-modality data often improves the diagnostic performance for brain disorders. However, obtaining this complementary information is challenging when the data are incomplete. Existing methods, such as low-rank matrix completion (which imputes the missing data) and multi-task learning (which restructures the problem into the joint learning of multiple tasks, each associated with a subset of complete data), simply concatenate features from different modalities without considering their underlying correlations. Furthermore, most methods conduct multi-modality fusion and prediction model learning in separate steps, which may lead to a sub-optimal solution. To address these issues, we propose a novel diagnostic model that integrates missing data recovery, latent space learning, and prediction model learning into a unified framework. Specifically, we first recover the missing modality by maximizing the dependency among the different modalities. We then further exploit the modality correlations by projecting the different modalities into a common latent space. In addition, we employ an l1-norm in our loss function to mitigate the influence of sample outliers. Finally, we map the learned latent representation into the label space. All these tasks are learned iteratively in a unified framework, in which the label information (from the training samples) also inherently guides the recovery of the missing modality. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of our method.
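The common latent space step described above can be sketched as follows. This is a toy illustration only: it learns a shared latent representation across modalities by alternating least squares with a squared (Frobenius) loss, whereas the actual model uses an l1-norm loss and additionally recovers the missing modality and learns the label mapping jointly; the function `learn_latent_space` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def learn_latent_space(modalities, k=5, n_iters=50, seed=0):
    """Toy sketch of multi-modality latent space learning.

    Approximates X_m ~= W_m @ H for each modality m, where
    X_m is (d_m x n), W_m is a modality-specific projection
    (d_m x k), and H (k x n) is the latent representation
    shared by all modalities. Uses alternating least squares
    with a squared loss (the paper's model uses an l1-norm
    loss for robustness to sample outliers).
    """
    rng = np.random.default_rng(seed)
    n = modalities[0].shape[1]
    H = rng.standard_normal((k, n))          # shared latent code, random init
    for _ in range(n_iters):
        # Update each modality's projection W_m given the shared H.
        Ws = [X @ np.linalg.pinv(H) for X in modalities]
        # Update the shared latent H given all projections
        # (a single stacked least-squares problem).
        W_stack = np.vstack(Ws)
        X_stack = np.vstack(modalities)
        H = np.linalg.pinv(W_stack) @ X_stack
    return Ws, H

# Usage on synthetic data generated from a shared latent factor:
rng = np.random.default_rng(1)
H_true = rng.standard_normal((3, 40))        # ground-truth shared code
X1 = rng.standard_normal((10, 3)) @ H_true   # modality 1 (10 features)
X2 = rng.standard_normal((8, 3)) @ H_true    # modality 2 (8 features)
Ws, H = learn_latent_space([X1, X2], k=3)
```

In the full framework, `H` would in turn be mapped into the label space, and the label information would feed back into the recovery of the missing modality during the iterative optimization.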