The use of gesture as a natural interface plays a vital role in achieving intelligent Human-Computer
Interaction (HCI). Human gestures convey meaning through different visual actions, such as hand motion, facial
expression, and torso movement. So far, most work in the field of gesture recognition has
focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture
recognition framework that combines different groups of features, such as facial-expression features and
hand-motion features, extracted from image frames captured by a single web camera. We consider 12 classes
of human gestures with facial expressions carrying neutral, negative, and positive meanings, drawn from American Sign
Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level,
early feature combination is performed by concatenating and weighting the different feature groups, and
LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression
space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at
a later stage. A Condensation-based algorithm is adopted for classification. We collected a data set with three
to seven recording sessions and conducted experiments with both combination techniques. Experimental results
show that facial analysis improves hand gesture recognition and that decision-level fusion performs better than
feature-level fusion.
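The two fusion strategies described above can be sketched in a few lines. This is an illustrative outline, not the authors' implementation: the feature dimensions, modality weights, and the stand-in classifier scores are all assumptions, and the LDA projection step is only noted in a comment.

```python
import numpy as np

# Hypothetical per-frame feature groups (dimensions are assumptions):
# hand-motion features and facial-expression features.
rng = np.random.default_rng(0)
hand_feat = rng.normal(size=8)   # e.g. trajectory/velocity descriptors
face_feat = rng.normal(size=5)   # e.g. expression descriptors

# Assumed modality weights for both fusion schemes.
w_hand, w_face = 0.7, 0.3

# --- Feature-level (early) fusion: weight, then concatenate.
fused = np.concatenate([w_hand * hand_feat, w_face * face_feat])
# In the paper, LDA would then project `fused` onto a discriminative
# expression subspace before classification.

# --- Decision-level (late) fusion: classify each modality separately,
# then combine per-class scores with the same weights.
n_classes = 12
hand_scores = rng.random(n_classes)  # stand-in for a hand-only classifier
face_scores = rng.random(n_classes)  # stand-in for a face-only classifier
combined = w_hand * hand_scores + w_face * face_scores
predicted = int(np.argmax(combined))
print(fused.shape, predicted)
```

The key design difference is where the combination happens: early fusion lets the projection (LDA) exploit correlations between modalities, while late fusion keeps each modality's classifier independent and only merges their weighted votes.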
Recognizing hand gestures from video sequences acquired by a dynamic camera could provide a useful interface between
humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving-camera
images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for
extracting hand motion trajectories, which is obtained from the located human face, body parts, and the hand blob
changing factor. A Condensation-based algorithm and a PCA-based algorithm were applied to recognize the extracted hand
trajectories. In our previous work, the Condensation-based method was applied only to one person's hand gestures. In this
paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further
improvement, temporal changes in the observed hand-area changing factor are utilized as new image features, which are
stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into
one-hand gesture categories, two-hand gesture categories, or categories defined by temporal changes in the hand blob. We
demonstrate the effectiveness of the proposed method through experiments on 45 kinds of sign-language-based gestures,
drawn from Japanese and American Sign Language and obtained from 5 people. Our experimental results show that the
PCA-based approach outperforms the Condensation-based method.
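The PCA-based trajectory matching described above can be outlined as follows. This is a minimal sketch under assumed conventions (trajectories resampled to a fixed length and flattened; nearest-neighbor matching in the eigenspace), not the authors' exact pipeline.

```python
import numpy as np

# Toy database of gesture trajectories (values are synthetic): each
# trajectory is T frames of (x, y) hand positions, flattened to a vector.
rng = np.random.default_rng(1)
n_db, T = 20, 30
db = rng.normal(size=(n_db, T * 2))
labels = rng.integers(0, 5, size=n_db)   # assumed gesture-class labels

# Build the eigenspace: center the data and take the top-k principal
# axes via SVD of the centered database matrix.
mean = db.mean(axis=0)
_, _, vt = np.linalg.svd(db - mean, full_matrices=False)
k = 5
proj = (db - mean) @ vt[:k].T            # database in eigenspace

# Classify a query trajectory (here: a lightly perturbed copy of entry 3)
# by nearest neighbor in the eigenspace.
query = db[3] + 0.01 * rng.normal(size=T * 2)
q = (query - mean) @ vt[:k].T
nearest = int(np.argmin(np.linalg.norm(proj - q, axis=1)))
print(nearest, labels[nearest])
```

Projecting onto a low-dimensional eigenspace both denoises the trajectories and makes the nearest-neighbor search cheap, which is one plausible reason such an approach can outperform per-frame probabilistic tracking for whole-trajectory classification.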
To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from
video sequences acquired by a dynamic camera could be useful for human-to-robot interface systems. Most
conventional hand gesture technologies deal only with still-camera images. This paper proposes a very simple and stable
method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC
System), which is obtained from the located human face and both hands. We then apply the Condensation Algorithm to the
extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed
method by conducting experiments on 35 kinds of sign-language-based hand gestures.
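The idea behind a face-anchored local coordinate system can be illustrated with a short sketch. This is an assumption about how such a frame might be constructed (translate by the face center, scale by the face width), not the paper's exact HFLC definition; the function and variable names are hypothetical.

```python
import numpy as np

def to_local(hand_xy, face_xy, face_width):
    """Express a hand position in a face-anchored local frame:
    translate by the face center and scale by the face width."""
    return (np.asarray(hand_xy) - np.asarray(face_xy)) / face_width

# The same physical gesture seen by a moving camera: the image
# coordinates shift and scale between views, but the local
# coordinates stay the same, which is what makes the extracted
# trajectories stable under camera motion.
a = to_local(hand_xy=(260.0, 340.0), face_xy=(200.0, 100.0), face_width=60.0)
b = to_local(hand_xy=(530.0, 690.0), face_xy=(410.0, 210.0), face_width=120.0)
print(a, b)   # both give local coordinates (1.0, 4.0)
```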