Sketching has become fashionable with the increasing availability of touch screens on portable devices. Sketches are typically used for rendering the visual world, automatic sketch style recognition and abstraction, sketch-based image retrieval (SBIR), and sketch-based perceptual grouping, yet automatically generating a sketch from a real image remains an open problem. We propose a convolutional neural network-based model, named SG-Net, to generate sketches from natural images. SG-Net is trained to learn the relationship between images and sketches, making full use of edge information to produce a rough sketch. Mathematical morphology is then applied as a post-processing step to eliminate redundant artifacts in the generated sketches. In addition, to increase the diversity of the generated sketches, we introduce thin plate splines to produce sketches in different styles. We evaluate the proposed sketch generation method both quantitatively and qualitatively on a challenging dataset, where it achieves superior performance to established methods. Moreover, we conduct extensive experiments on the SBIR task; the results on the Flickr15k dataset demonstrate that our method improves retrieval performance compared with state-of-the-art methods.
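The following is a minimal, illustrative Python sketch (not the authors' code) of the two post-processing ideas named in the abstract: morphological cleanup of a rough generated sketch and thin plate spline (TPS) warping to create stylistic variants. All function names, kernel sizes, control-grid parameters, and file names are assumptions for illustration only.

```python
import cv2
import numpy as np

def clean_sketch(sketch, min_area=20):
    """Suppress small isolated artifacts in a generated sketch (dark strokes
    on a white background) with a morphological closing followed by an area
    opening that drops stroke components smaller than min_area pixels."""
    strokes = (sketch < 128).astype(np.uint8)                     # strokes as foreground
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    strokes = cv2.morphologyEx(strokes, cv2.MORPH_CLOSE, kernel)  # bridge tiny gaps
    n, labels, stats, _ = cv2.connectedComponentsWithStats(strokes, connectivity=8)
    keep = np.zeros_like(strokes)
    for i in range(1, n):                                         # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 1
    return np.where(keep > 0, 0, 255).astype(np.uint8)            # back to dark-on-white

def tps_variant(sketch, jitter=6.0, grid=4, seed=0):
    """Produce a stylistic variant by warping the sketch with a thin plate
    spline fitted to a randomly jittered control grid.
    Requires opencv-contrib-python for the TPS shape transformer."""
    rng = np.random.default_rng(seed)
    h, w = sketch.shape[:2]
    xs = np.linspace(0, w - 1, grid, dtype=np.float32)
    ys = np.linspace(0, h - 1, grid, dtype=np.float32)
    src = np.array([[x, y] for y in ys for x in xs], dtype=np.float32)
    dst = (src + rng.normal(0.0, jitter, src.shape)).astype(np.float32)
    matches = [cv2.DMatch(i, i, 0.0) for i in range(len(src))]
    tps = cv2.createThinPlateSplineShapeTransformer()
    # warpImage uses backward mapping, so the jittered grid is passed first.
    tps.estimateTransformation(dst.reshape(1, -1, 2), src.reshape(1, -1, 2), matches)
    return tps.warpImage(sketch, borderValue=255)

if __name__ == "__main__":
    # "rough_sketch.png" is a hypothetical rough output of a sketch generator.
    rough = cv2.imread("rough_sketch.png", cv2.IMREAD_GRAYSCALE)
    cleaned = clean_sketch(rough)
    cv2.imwrite("clean_sketch.png", cleaned)
    cv2.imwrite("tps_variant.png", tps_variant(cleaned))
```

The cleanup step assumes strokes several pixels wide, as typically produced by a CNN-based generator; how SG-Net itself couples edge information with these steps is described in the full paper.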