In many practical applications, we face the awkward problem that an image classifier trained in one scenario is difficult to use in a new scenario. Traditionally, probability-inference-based methods are used to address this problem. From the viewpoint of image representation, we propose an approach for domain adaptation of image classification. First, all source samples are collected to form a dictionary. Then, we encode each target sample by combining this dictionary with local geometric information. Based on this new representation, called the target nearest-neighbor representation, image classification achieves good performance in the target domain. Our core contribution is that the nearest-neighbor information of the target sample is exploited to form a more robust representation. Experimental results confirm the effectiveness of our method.
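The pipeline described above (source samples as a dictionary, target samples encoded with local geometric information, classification by the resulting representation) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the function name `tnn_classify`, the choice of `k`, the ridge regularizer, and the class-wise residual rule are all assumptions made for the example.

```python
import numpy as np

def tnn_classify(X_src, y_src, x_tgt, k=5, reg=1e-6):
    """Hypothetical sketch of a target nearest-neighbor representation.

    X_src : (n, d) source samples forming the dictionary (assumption).
    y_src : (n,) source labels.
    x_tgt : (d,) target sample to classify.
    """
    # Local geometry: keep only the k nearest source atoms to the target.
    dist = np.linalg.norm(X_src - x_tgt, axis=1)
    nn = np.argsort(dist)[:k]
    D = X_src[nn].T                      # local sub-dictionary, shape (d, k)

    # Encode the target over the local dictionary (ridge-regularized
    # least squares for numerical stability -- an assumption).
    w = np.linalg.solve(D.T @ D + reg * np.eye(k), D.T @ x_tgt)

    # Classify by the smallest class-wise reconstruction residual.
    labels = y_src[nn]
    best_c, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        r = np.linalg.norm(x_tgt - D[:, mask] @ w[mask])
        if r < best_r:
            best_c, best_r = c, r
    return best_c
```

In this sketch, restricting the code to the target's nearest neighbors is what injects the local geometric information; a plain dictionary encoding over all source samples would ignore it.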