Paper
8 June 2024 Research on netizen sentiment recognition based on multimodal deep learning
Nan Jia, Tianhao Yao
Proceedings Volume 13171, Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024); 1317124 (2024) https://doi.org/10.1117/12.3032110
Event: 3rd International Conference on Algorithms, Microchips and Network Applications (AMNA 2024), 2024, Jinan, China
Abstract
With the rapid rise of social media and the Internet, network security issues are becoming increasingly prominent. More and more people express their emotions and opinions online, and these expressions are becoming increasingly diverse, so accurate analysis of netizens' emotions is particularly important. Traditional emotion recognition methods rely mainly on text analysis, but as network media diversify, text analysis alone can no longer meet practical needs. Exploring the application of multimodal deep learning to netizen emotion recognition has therefore become a natural choice for public security organs. This paper explores the application of multimodal deep learning to netizen emotion recognition. Using a multimodal dataset of text and images, this study constructs a BERT model and a fine-tuned VGG-16 model to extract emotional features from the text and image modalities, respectively. A multi-head attention mechanism is introduced to combine the two modalities into a fusion model, and the study explores how their combination improves classification performance. The final accuracy is 0.70 for the text modality, 0.58 for the image modality, and 0.73 for the multimodal fusion model, which is 0.03 and 0.15 higher than the text and image modalities, respectively, demonstrating the effectiveness of the multimodal fusion approach. This work can provide new ideas and methods for analysis and early warning by public security organs, as well as reference and inspiration for research in other fields.
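The architecture described in the abstract — modality-specific encoders whose features are combined through multi-head attention before classification — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions (768 for BERT, 4096 for the VGG-16 fc layer), the shared dimension, the number of heads, and the three-class output are all assumptions, and random tensors stand in for the actual encoder outputs.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Fuses text and image features with multi-head attention.

    In the paper, text features come from BERT and image features
    from a fine-tuned VGG-16; here the encoders are assumed to have
    already produced those features.
    """
    def __init__(self, text_dim=768, image_dim=4096, fused_dim=256,
                 num_heads=4, num_classes=3):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        # Multi-head attention: text tokens attend to the image feature.
        self.attn = nn.MultiheadAttention(fused_dim, num_heads,
                                          batch_first=True)
        self.classifier = nn.Linear(fused_dim * 2, num_classes)

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, seq_len, text_dim)
        # image_feats: (batch, image_dim)
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats).unsqueeze(1)  # (batch, 1, fused_dim)
        fused, _ = self.attn(query=t, key=v, value=v)
        # Pool the attended text sequence and concatenate the image view.
        pooled = torch.cat([fused.mean(dim=1), v.squeeze(1)], dim=-1)
        return self.classifier(pooled)

model = FusionClassifier()
text = torch.randn(2, 16, 768)   # stand-in for BERT token embeddings
image = torch.randn(2, 4096)     # stand-in for VGG-16 fc features
logits = model(text, image)
print(logits.shape)  # torch.Size([2, 3])
```

Projecting both modalities to a common dimension before attention is one common design choice; the paper does not specify whether attention runs text-to-image, image-to-text, or over the concatenated sequence.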
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Nan Jia and Tianhao Yao "Research on netizen sentiment recognition based on multimodal deep learning", Proc. SPIE 13171, Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317124 (8 June 2024); https://doi.org/10.1117/12.3032110
KEYWORDS
Emotion, Data modeling, Deep learning, Image fusion, Feature fusion, Neural networks, Machine learning