Attention is closely tied to everyday human activity. To detect attention states quickly and accurately with limited resources, this research proposes an attention-state detection method based on differential entropy (DE) and power spectral density (PSD). Electroencephalogram (EEG) data from 15 participants were processed with the Fast Fourier Transform (FFT) to extract DE and PSD features, which were then normalized. These features were fed into a Support Vector Machine (SVM), and after parameter optimization the resulting model performed well at attention-state detection. The proposed method achieved a maximum classification accuracy of 85% and an average accuracy of 67%, surpassing traditional SVM models trained solely on DE or PSD features, as well as single-channel and multi-channel SVM baselines. The method can also be used to learn additional features for attention verification and generalizes well toward building a robust deep learning system.
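The feature-extraction pipeline the abstract describes (FFT, then band-wise PSD and DE features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `band_features` helper, the band edges, and the sampling rate are assumptions, and DE is computed from band power under the usual Gaussian assumption, DE = ½ log(2πeσ²).

```python
import numpy as np

def band_features(x, fs, bands):
    """Per-band PSD power and differential entropy for one EEG channel.

    Illustrative helper: PSD via an FFT periodogram; DE uses the band
    power as a variance proxy under a Gaussian assumption.
    """
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)  # one-sided periodogram
    feats = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power = psd[mask].mean()                      # PSD feature
        de = 0.5 * np.log(2 * np.pi * np.e * power)   # DE feature
        feats[name] = (power, de)
    return feats

# Synthetic 2-second signal at 200 Hz with a dominant 10 Hz (alpha) tone.
rng = np.random.default_rng(0)
fs = 200
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
feats = band_features(x, fs, bands)
```

The per-band (power, DE) values would then be normalized and concatenated across channels before being passed to an SVM classifier, as the abstract outlines.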
KEYWORDS: Gesture recognition, Data modeling, Feature extraction, Education and training, Performance modeling, Machine learning, Convolution, Deep learning, Signal attenuation, Sensors
Wi-Fi-based wireless sensing has become a research hotspot in the field of perception in recent years, enabling intelligent sensing of human activities and the surrounding environment. However, existing wireless sensing models have large parameter counts, making real-time perception challenging, especially in computationally constrained scenarios such as mobile devices. To address this issue, this paper proposes a classification and recognition algorithm based on a hybrid of Spatially Separable Convolution (SSC) and a Stacked Gated Recurrent Unit (SGRU). In the shallow layers of the hybrid model, a feature extraction module composed of spatially separable convolutions captures the spatial features of human gestures while maintaining their temporal consistency. In the deeper layers, an SGRU network learns the spatiotemporal features of the gestures; the SGRU consists of two layers, with the output of the first layer serving as the input to the second. Validated on the open-source Widar human-gesture dataset, the proposed SSC-SGRU achieves an accuracy improvement of approximately 7.2% over ResNet18 while using approximately 11.0M fewer parameters.
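The parameter saving behind spatially separable convolution can be illustrated with a minimal numpy sketch, independent of the paper's actual architecture: a rank-1 k×k kernel factors into a k×1 column followed by a 1×k row, so one 3×3 filter (9 weights) is replaced by 3 + 3 = 6 weights while producing the same output for separable kernels. The `conv2d_valid` helper is a naive illustrative implementation, not the authors' code.

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2D cross-correlation (illustration only)."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# A separable 3x3 kernel factors into a 3x1 column and a 1x3 row
# (here a Sobel-like edge filter, chosen only for illustration).
col = np.array([[1.0], [2.0], [1.0]])
row = np.array([[1.0, 0.0, -1.0]])
full = col @ row  # rank-1 3x3 kernel

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))

direct = conv2d_valid(img, full)                       # one 3x3 pass
separable = conv2d_valid(conv2d_valid(img, col), row)  # 3x1 then 1x3

params_full = full.size           # 9 weights
params_sep = col.size + row.size  # 6 weights
```

In a deep model this factorization is applied per layer, which is one way architectures like the proposed SSC front end cut parameters relative to standard convolutions in networks such as ResNet18.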