Lip-reading techniques show considerable promise for speech recognition in noisy environments and for hearing-impaired listeners. We address two key issues in lip reading: (1) how to extract discriminative lip motion features and (2) how to build a classifier that delivers high recognition accuracy. For the first issue, a projection local spatiotemporal descriptor, which captures lip appearance and motion information simultaneously, is used to provide an efficient representation of a video sequence. For the second issue, a kernel extreme learning machine (KELM) based on the single-hidden-layer feedforward neural network is presented to distinguish among utterance classes; this classifier offers fast training and strong robustness to nonlinear data. Furthermore, quantum-behaved particle swarm optimization with binary encoding is introduced to jointly select an appropriate feature subset and the parameters for KELM training. Experiments on the AVLetters and OuluVS databases show that the proposed lip-reading method achieves superior recognition accuracy compared with two previous methods.
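To make the classification stage concrete, the following is a minimal NumPy sketch of a kernel extreme learning machine in its standard closed form (output weights β = (Ω + I/C)⁻¹T, where Ω is the training kernel matrix and T the one-hot targets). This is an illustrative sketch only, not the authors' implementation; the RBF kernel, the toy data, and the parameter values C and gamma are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def kelm_train(X, y, C=100.0, gamma=1.0):
    """Solve for output weights beta = (Omega + I/C)^-1 T."""
    n_classes = int(y.max()) + 1
    T = np.eye(n_classes)[y]                      # one-hot target matrix
    Omega = rbf_kernel(X, X, gamma)               # training kernel matrix
    beta = np.linalg.solve(Omega + np.eye(len(X)) / C, T)
    return beta

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    """Predicted class = argmax over kernel outputs K(x, X_train) @ beta."""
    K = rbf_kernel(X_new, X_train, gamma)
    return np.argmax(K @ beta, axis=1)

# Toy two-class data standing in for lip-motion feature vectors (hypothetical).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])

beta = kelm_train(X, y)
pred = kelm_predict(X, beta, X)
```

Because training reduces to a single regularized linear solve rather than iterative backpropagation, this form of ELM is fast to train, which is the property the abstract highlights; in the paper, the kernel and regularization parameters are instead tuned by binary quantum-behaved particle swarm optimization together with feature selection.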