Multi-modality imaging provides complementary information for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD) and its prodrome, mild cognitive impairment (MCI). In this paper, we propose a kernel-based multi-task sparse representation model that combines the strengths of MRI and PET imaging features for improved classification of AD. Sparse-representation-based classification seeks to represent a testing sample as a sparse linear combination of training samples. Our approach uses information from the different imaging modalities to enforce class-level joint sparsity via multi-task learning, so that the most representative classes, common across all modalities, are jointly selected to reconstruct the testing sample. We further improve discriminative power by extending the framework to a reproducing kernel Hilbert space (RKHS), allowing nonlinearity in the features to be captured for better classification. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database show that the proposed method achieves 93.3% and 78.9% accuracy in classifying AD and MCI, respectively, from healthy controls, demonstrating promising performance for AD studies.
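The pipeline described above can be illustrated with a small sketch. The code below is not the paper's implementation: it uses an RBF kernel, enforces sample-level (rather than class-level) joint sparsity across modalities via an L2,1 penalty, and solves the resulting problem with a plain proximal-gradient (ISTA) loop; the synthetic two-modality data, the `lam` and `gamma` values, and the function names are all illustrative assumptions. It shows the core idea: a shared sparsity pattern is selected across modalities in kernel space, and the test sample is assigned to the class with the smallest summed reconstruction residual.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_multitask_src(Ks, ks, kxx, labels, lam=0.1, n_iter=300):
    """Kernelized multi-task sparse representation classification (sketch).

    Ks     : list of (n, n) training kernel matrices, one per modality.
    ks     : list of (n,) kernel vectors between training data and the test sample.
    kxx    : list of scalars k_m(x, x), one per modality.
    labels : (n,) class labels of the training samples.

    Solves, via proximal gradient (ISTA),
        min_A  sum_m [ 0.5 * a_m^T K_m a_m - a_m^T k_m ]  +  lam * sum_i ||A[i, :]||_2
    so the same training samples are jointly selected across modalities
    (note: the paper groups coefficients by class, not by sample), then
    classifies by the smallest class-wise reconstruction residual in the RKHS.
    """
    M, n = len(Ks), len(labels)
    A = np.zeros((n, M))  # one coefficient column per modality
    # Step size from the largest kernel eigenvalue across modalities
    L = max(np.linalg.eigvalsh(K)[-1] for K in Ks)
    step = 1.0 / L
    for _ in range(n_iter):
        G = np.stack([Ks[m] @ A[:, m] - ks[m] for m in range(M)], axis=1)
        B = A - step * G
        # Row-wise (L2,1) group soft-thresholding -> joint sample selection
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        A = B * np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    # Class-wise reconstruction residuals, summed over modalities
    residuals = {}
    for c in np.unique(labels):
        idx = labels == c
        r = 0.0
        for m in range(M):
            a = A[idx, m]
            r += kxx[m] - 2 * a @ ks[m][idx] + a @ Ks[m][np.ix_(idx, idx)] @ a
        residuals[c] = r
    return min(residuals, key=residuals.get)

# Toy usage: two synthetic "modalities" of different dimensionality, two classes.
rng = np.random.default_rng(0)
X1 = np.vstack([rng.normal(0, 0.3, (10, 4)), rng.normal(3, 0.3, (10, 4))])
X2 = np.vstack([rng.normal(0, 0.3, (10, 6)), rng.normal(3, 0.3, (10, 6))])
y = np.array([0] * 10 + [1] * 10)
x1, x2 = rng.normal(3, 0.3, 4), rng.normal(3, 0.3, 6)  # test sample from class 1
pred = kernel_multitask_src(
    [rbf_kernel(X1, X1), rbf_kernel(X2, X2)],
    [rbf_kernel(X1, x1[None])[:, 0], rbf_kernel(X2, x2[None])[:, 0]],
    [1.0, 1.0],  # k(x, x) = 1 for the RBF kernel
    y,
)
```

Because the penalty is applied to rows of the shared coefficient matrix, a training sample is either used by all modalities or by none, which is the multi-task coupling the abstract refers to; replacing the row groups with per-class groups would recover the class-level joint sparsity used in the paper.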