At present, even state-of-the-art retinal prostheses are limited by low resolution and by their image processing algorithms. To identify suitable parameters for prosthetic vision and to help visual prosthesis wearers interpret low-resolution gray-scale images correctly and effectively, this paper reports action recognition experiments based on virtual simulation. Animated videos of a skeletonized upright walking figure were constructed in 3ds Max, and the action animation clips were then pixelized in MATLAB. The actions were classified into three categories: combined actions, simple actions, and difficult actions. Each action clip was rendered at six resolutions (16×16, 24×24, 32×32, 48×48, 64×64, and 128×128). Twenty observers (grouped by gender and experience) were recruited and asked to recognize the actions at the different resolutions. The results showed no significant gender-related pattern in recognition accuracy. Regarding experience, the recognition accuracy of experienced observers was higher than that of inexperienced ones. The conclusion drawn was that learning experience can improve recognition accuracy, that experienced observers require a lower resolution to recognize an action, and that 48×48 is a suitable resolution with considerable latent capacity.
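The abstract states only that the clips were pixelized in MATLAB, without giving the method. As a minimal sketch of the kind of pixelization described (block-averaging a gray-scale frame down to one of the six test resolutions), the Python function below is illustrative; the block-averaging choice, function name, and parameters are assumptions, not the authors' code.

```python
import numpy as np

def pixelize(frame: np.ndarray, size: int) -> np.ndarray:
    """Block-average a gray-scale frame down to size x size.

    Illustrative sketch of the pixelization step the abstract
    attributes to MATLAB; the paper's exact method is not given,
    so block averaging is an assumption.
    """
    h, w = frame.shape
    # Crop so the frame tiles evenly into size x size blocks.
    h_crop, w_crop = h - h % size, w - w % size
    cropped = frame[:h_crop, :w_crop].astype(np.float64)
    block_h, block_w = h_crop // size, w_crop // size
    # Reshape to (size, block_h, size, block_w), then average each block.
    blocks = cropped.reshape(size, block_h, size, block_w)
    return blocks.mean(axis=(1, 3))

# Example: simulate the six test resolutions on a synthetic 256x256 frame.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    for res in (16, 24, 32, 48, 64, 128):
        low_res = pixelize(frame, res)
        print(res, low_res.shape)
```

Applying such a function frame by frame to each rendered clip would yield the low-resolution gray-scale stimuli used in the recognition test.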