The extreme learning machine (ELM) is a non-iterative algorithm for training single-hidden-layer feedforward neural networks (SLFNs). ELM has been shown to achieve good generalization performance and much faster learning than conventional gradient-based algorithms. However, because the hidden-neuron parameters (i.e., input weights and biases) are determined randomly, ELM may require a large number of hidden neurons. In this paper, the original harmony search (HS) and its variants, namely the improved harmony search (IHS), global-best harmony search (GHS), and intelligent tuned harmony search (ITHS), are used to optimize the input weights and hidden biases of ELM. The output weights are determined analytically using the Moore–Penrose (MP) generalized inverse. The performance of the hybrid approaches is tested on several benchmark classification problems. The simulation results show that integrating the HS algorithms with ELM yields compact network architectures with good generalization performance.
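The baseline ELM procedure described above (random hidden parameters, analytic output weights via the Moore–Penrose pseudoinverse) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the choice of a sigmoid activation are assumptions for the example.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM training: randomly drawn hidden-neuron parameters,
    output weights solved analytically with the MP pseudoinverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-neuron parameters (input weights W and biases b) are random;
    # in the paper these are what the HS variants optimize instead.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T             # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the output weights are obtained in a single pseudoinverse step rather than by iterative gradient descent, training is fast; the trade-off, as noted above, is that randomly chosen hidden parameters may force a larger hidden layer than necessary.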