In this paper, we propose a method to analyze the differences in gaze and hand motions across skill levels in assembly tasks. The method quantizes the positions of the gaze and hands into eighteen areas and converts each position into a code. Next, it calculates the occurrence frequency of pairs of codes. Then, it generates co-occurrence histograms, called “gaze/motion integration features,” from these frequencies. These features allow us to analyze the relationship between gaze and hand motions. An analysis of the skill improvement process shows that the non-dominant hand at the “elementary level” stays in two areas, whereas at the “intermediate level” it moves across five areas; thus, humans move their non-dominant hand more efficiently at the “intermediate level” than at the “elementary level.” In addition, we found that the gaze at the “intermediate level” moves across eight areas, whereas at the “expert level” it moves across only three, remaining near the center of the workbench.
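The feature construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 6x3 grid layout for the eighteen areas, the normalized coordinates, and the function names are all assumptions.

```python
# Sketch of "gaze/motion integration features": quantize gaze and hand
# positions into 18 areas, code each sample, and build a co-occurrence
# histogram over (gaze code, hand code) pairs.
from collections import Counter

N_AREAS = 18  # the method quantizes positions into eighteen areas


def to_code(x, y, cols=6, rows=3):
    """Map a normalized (x, y) position in [0, 1)^2 to one of 18 area
    codes. A 6x3 grid is a hypothetical layout for illustration."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return row * cols + col


def cooccurrence_histogram(gaze_codes, hand_codes):
    """Occurrence frequency of (gaze, hand) code pairs, flattened into
    an 18x18 normalized feature vector."""
    counts = Counter(zip(gaze_codes, hand_codes))
    total = sum(counts.values())
    return [counts.get((g, h), 0) / total
            for g in range(N_AREAS) for h in range(N_AREAS)]


# Toy usage: three synchronized gaze/hand samples.
gaze = [to_code(0.1, 0.2), to_code(0.5, 0.5), to_code(0.5, 0.5)]
hand = [to_code(0.1, 0.8), to_code(0.5, 0.5), to_code(0.5, 0.5)]
feature = cooccurrence_histogram(gaze, hand)
```

Because the histogram is over pairs of codes, it captures which gaze area tends to accompany which hand area, which is what makes the comparison across skill levels possible.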
In this paper, we propose using image recognition techniques to estimate the “understanding measure” in person-to-person teaching situations. The term “understanding measure” refers to how strongly a teacher feels a student understands a topic. First, we extract a student’s nonverbal behavior (head movement, gaze, and blinking) as features for the estimation. Next, we calculate a subspace from these features using principal component analysis (PCA) and linear discriminant analysis (LDA). Finally, we classify unknown data as either “understood” or “did not understand” using a kNN classifier in the subspace. Our experiments confirmed that the F-measure of our method was 0.75 for “understood” and 0.60 for “did not understand,” improvements of 0.38 and 0.11, respectively, over previous methods.
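The PCA, LDA, and kNN pipeline above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's code: the feature extraction step is omitted, the data is synthetic, and the dimensions, neighbor count, and regularization are arbitrary choices.

```python
# Sketch: project features with PCA, then a two-class Fisher (LDA) axis,
# then classify "understood" (1) vs "did not understand" (0) by kNN.
import numpy as np


def pca(X, n_components):
    """Principal component analysis via eigendecomposition of the
    covariance matrix; returns projected data, basis, and mean."""
    mean = X.mean(axis=0)
    Xc = X - mean
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order], vecs[:, order], mean


def lda_axis(X, y):
    """Fisher discriminant axis for two classes labeled 0/1."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = sum((Xi - m).T @ (Xi - m)
             for Xi, m in [(X[y == 0], m0), (X[y == 1], m1)])
    # Small ridge term keeps the scatter matrix invertible.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)


def knn_predict(train_z, train_y, z, k=3):
    """Majority vote among the k nearest points in the 1-D subspace."""
    nearest = train_y[np.argsort(np.abs(train_z - z))[:k]]
    return int(np.round(nearest.mean()))


# Synthetic stand-in for the extracted nonverbal-behavior features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)

Xp, basis, mean = pca(X, n_components=3)
w = lda_axis(Xp, y)
z = Xp @ w  # training samples in the discriminant subspace

# Classify a sample by projecting it through the same PCA + LDA maps.
pred = knn_predict(z, y, ((X[5] - mean) @ basis) @ w)
```

The design point is that PCA compresses the raw features before LDA finds the direction separating the two classes, so the kNN vote happens in a low-dimensional, class-discriminative space.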