Like fingerprints, facial features, and other biometric traits, the human voice carries an individual's physiological signature: it is unique, stable personal information that cannot be stolen or lost. A speaker's voiceprint remains stable even as the voice itself varies, which makes voiceprint features harder to imitate or forge and therefore well suited to strong, secure authentication. Building on these properties, this paper designs a voiceprint recognition system based on neural networks. To help the network learn effectively from raw data, time- and frequency-domain masking is applied for data augmentation. The network adopts an encoder-decoder design built on the Transformer architecture for end-to-end processing, and a triplet loss function is used to evaluate and optimize the network parameters, improving the model's prediction accuracy. Modeling experiments were conducted on the LibriSpeech and CN-Celeb datasets. The system performs end-to-end voiceprint recognition with deep learning and, in testing, met the design requirements.
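The two techniques named above can be illustrated with a minimal sketch. This is not the paper's implementation: the mask widths, margin, and NumPy-based distance computation are illustrative assumptions, showing only the general form of SpecAugment-style time/frequency masking and of the triplet loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_augment(spec, max_t=10, max_f=8):
    """Time- and frequency-domain masking on a (time, freq) spectrogram.

    Zeroes out one random span of time frames and one random band of
    frequency bins; span widths up to max_t / max_f are illustrative.
    """
    spec = spec.copy()
    t_len, f_len = spec.shape
    # Time mask: hide a contiguous run of frames.
    t = rng.integers(0, max_t + 1)
    t0 = rng.integers(0, t_len - t + 1)
    spec[t0:t0 + t, :] = 0.0
    # Frequency mask: hide a contiguous band of bins.
    f = rng.integers(0, max_f + 1)
    f0 = rng.integers(0, f_len - f + 1)
    spec[:, f0:f0 + f] = 0.0
    return spec

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on speaker embeddings: pull same-speaker pairs
    together and push different-speaker pairs apart by >= margin."""
    d_ap = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return max(d_ap - d_an + margin, 0.0)
```

The loss is zero once a different-speaker embedding is farther from the anchor than the same-speaker one by at least the margin, so training focuses on triplets the model still confuses.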