Multi antenna radar system for American Sign Language (ASL) recognition using deep learning
31 May 2022
Abstract
This paper investigates an RF-based system for automatic American Sign Language (ASL) recognition. We consider radar for ASL through joint spatio-temporal preprocessing of the radar returns, combining time-frequency (TF) analysis with high-resolution receive beamforming. The additional degrees of freedom offered by joint temporal and spatial processing with a multiple-antenna sensor help recognize ASL conversations between two or more individuals. Beamforming is applied to collect spatial images in an attempt to resolve individuals communicating at the same time through hand and arm movements. The spatio-temporal images are fused and classified by a convolutional neural network (CNN), which can discern signs performed by different individuals even when the beamformer is unable to separate the respective signs completely. The focus group comprises individuals with varying expertise in sign language, and real-time measurements at 77 GHz are performed using a Texas Instruments (TI) cascade radar.
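The processing chain described in the abstract (receive beamforming to separate signers in angle, followed by time-frequency analysis to form micro-Doppler images for a CNN) can be illustrated with a minimal sketch. This is not the paper's implementation: the array size, element spacing, sampling rate, scatterer angles, and Doppler values below are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical sketch: conventional delay-and-sum receive beamforming on a
# uniform linear array (ULA), followed by an STFT to obtain the kind of
# micro-Doppler spectrogram a CNN classifier could consume.
fc = 77e9           # carrier frequency (Hz), as in the 77 GHz TI radar
c = 3e8             # speed of light (m/s)
lam = c / fc        # wavelength
n_rx = 8            # number of receive antennas (assumed)
d = lam / 2         # half-wavelength element spacing (assumed)
fs = 1000.0         # slow-time (Doppler) sampling rate (assumed)

def steering_vector(theta_deg):
    """ULA array response toward angle theta (degrees from broadside)."""
    n = np.arange(n_rx)
    phase = 2 * np.pi * d / lam * n * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

def beamform(snapshots, theta_deg):
    """Delay-and-sum beamformer: weight and sum antenna channels.

    snapshots: (n_rx, n_samples) complex slow-time data per antenna.
    Returns one beamformed slow-time signal steered to theta_deg.
    """
    w = steering_vector(theta_deg) / n_rx
    return w.conj() @ snapshots

# Simulate two scatterers (two signers) at different angles with
# different micro-Doppler frequencies, plus receiver noise.
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
sig_a = np.exp(2j * np.pi * 80 * t)    # signer A: +80 Hz Doppler
sig_b = np.exp(2j * np.pi * -50 * t)   # signer B: -50 Hz Doppler
x = (np.outer(steering_vector(-20), sig_a)
     + np.outer(steering_vector(25), sig_b)
     + 0.1 * (rng.standard_normal((n_rx, t.size))
              + 1j * rng.standard_normal((n_rx, t.size))))

# Steer toward signer A; signer B is attenuated (not fully removed),
# mirroring the abstract's point that the CNN must cope with residual mixing.
y = beamform(x, -20.0)

# Time-frequency (micro-Doppler) image that would be fed to the CNN.
f, tt, Z = stft(y, fs=fs, nperseg=256, return_onesided=False)
spectrogram = np.abs(Z)
peak_doppler = f[np.argmax(spectrogram.sum(axis=1))]
print(f"dominant Doppler after beamforming: {peak_doppler:.0f} Hz")
```

Steering toward one signer and repeating the STFT per beam yields one spatial/TF image per individual; fusing those images is what lets the CNN handle simultaneous signing.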
Conference Presentation
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Gavin MacLaughlin, Jack Malcolm, and Syed A. Hamza "Multi antenna radar system for American Sign Language (ASL) recognition using deep learning", Proc. SPIE 12097, Big Data IV: Learning, Analytics, and Applications, 120970N (31 May 2022); https://doi.org/10.1117/12.2618718
KEYWORDS
Radar, Phased arrays, Antennas, Neural networks, Convolutional neural networks, Receivers, Sensors