DeepFaceLab, also known as AI face swapping, is a computer-vision task of practical value that has become popular in recent years. Because it can replace any face in an existing video with another face of one's choosing without revealing flaws, face-swapping technology is increasingly welcomed by the entertainment, film, and art industries, and carries high commercial value. A number of works based on convolutional neural networks or generative adversarial networks have been proposed to extract facial features and accomplish face swapping. However, their network architectures handle small-face swapping poorly, and the quality of the generated video is not ideal. In this paper, we train the SAEHD model on top of a single-shot scale-invariant face detector. We use a scale-equitable face-detection framework to ensure that sufficient features can be extracted at different scales for face swapping. In addition, guided by the effective receptive field, anchors of different scales are assigned to different feature maps, and we adopt an equal-proportion sampling method so that the anchor sampling density is consistent across feature maps. By replacing faces frame by frame, we achieve very good results on the DeepFaceLab task, attaining relatively small source and destination losses at real-time speed.
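The equal-proportion anchor design mentioned above can be sketched as follows. The six detection strides and the 4:1 scale-to-stride ratio follow the standard S3FD (single shot scale-invariant face detector) design; the function name and exact layout here are illustrative assumptions, not the paper's implementation:

```python
# Sketch of S3FD-style equal-density anchors: each detection layer
# uses one square anchor whose scale is 4x the layer's stride, so the
# ratio scale/stride is constant and anchor sampling density matches
# across feature maps.

def anchor_boxes(stride, fmap_h, fmap_w, scale_ratio=4):
    """Generate (cx, cy, size) anchors for one feature map."""
    size = stride * scale_ratio          # anchor scale tied to stride
    boxes = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            cx = (j + 0.5) * stride      # one anchor centered per cell
            cy = (i + 0.5) * stride
            boxes.append((cx, cy, size))
    return boxes

# Strides of the six S3FD detection layers on a 640x640 input image;
# every layer yields the same 4:1 scale/stride ratio, hence equal density.
strides = [4, 8, 16, 32, 64, 128]
layer_anchors = {s: anchor_boxes(s, 640 // s, 640 // s) for s in strides}
```

Because `size / stride` is fixed, a layer with half the stride produces four times as many anchors of half the scale, keeping the number of anchors matched to each face size roughly constant across scales.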