Regular Articles

Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video

Author Affiliations
Amin Banitalebi-Dehkordi

University of British Columbia, Electrical and Computer Engineering Department, Vancouver, BC V6T 1Z4, Canada

Eleni Nasiopoulos

University of British Columbia, Department of Psychology, Vancouver, BC V6T 1Z4, Canada

Mahsa T. Pourazad

University of British Columbia, Institute for Computing, Information, and Cognitive Systems, Vancouver, BC V6T 1Z4, Canada

TELUS Communications Inc., Vancouver, BC V6B 8N9, Canada

Panos Nasiopoulos

University of British Columbia, Electrical and Computer Engineering Department, Vancouver, BC V6T 1Z4, Canada

University of British Columbia, Institute for Computing, Information, and Cognitive Systems, Vancouver, BC V6T 1Z4, Canada

J. Electron. Imaging. 25(1), 013008 (Jan 14, 2016). doi:10.1117/1.JEI.25.1.013008
History: Received March 1, 2015; Accepted December 4, 2015

Abstract.  Visual attention models (VAMs) predict the image or video regions most likely to attract human attention. Although saliency detection is well explored for two-dimensional (2-D) image and video content, only a few attempts have been made to design three-dimensional (3-D) saliency prediction models. Newly proposed 3-D VAMs have to be validated over large-scale video saliency datasets that include eye-tracking ground truth. Several eye-tracking datasets are publicly available for 2-D image and video content; for 3-D content, however, the research community still lacks large-scale video saliency datasets for validating different 3-D VAMs. We introduce a large-scale dataset containing eye-tracking data collected from 24 subjects who free-viewed 61 stereoscopic 3-D videos and their 2-D versions. We evaluate the performance of existing saliency detection methods over the proposed dataset. In addition, we created an online benchmark for validating the performance of existing 2-D and 3-D VAMs and for facilitating the addition of new VAMs to the benchmark. Our benchmark currently contains 50 different VAMs.
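Validating a VAM against such a dataset typically means comparing the model's predicted saliency map with a fixation density map derived from the eye-tracking data, using standard metrics such as the Pearson linear correlation coefficient (CC). A minimal sketch of that comparison, assuming both maps are same-size arrays (this is illustrative, not the authors' evaluation code):

```python
import numpy as np

def _normalize(m):
    # Zero-mean, unit-std normalization, as commonly applied before computing CC.
    m = m.astype(np.float64)
    std = m.std()
    return (m - m.mean()) / std if std > 0 else m - m.mean()

def correlation_coefficient(saliency_map, fixation_density):
    """Pearson linear correlation (CC) between a predicted saliency map
    and a ground-truth fixation density map of the same shape."""
    s = _normalize(saliency_map).ravel()
    f = _normalize(fixation_density).ravel()
    # With zero-mean, unit-std inputs, the mean of the product is Pearson's r.
    return float(np.mean(s * f))

# Toy maps; real ones would come from a VAM and from smoothed fixation points.
rng = np.random.default_rng(0)
pred = rng.random((48, 64))
cc_self = correlation_coefficient(pred, pred)  # identical maps give CC = 1.0
```

CC ranges from -1 to 1, with 1 indicating a perfect linear match; benchmarks such as this one usually report it alongside complementary metrics (e.g., AUC-based scores) because no single metric captures all aspects of prediction quality.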

© 2016 SPIE and IS&T

Topics

Eye ; Stereoscopy ; Video

Citation

Amin Banitalebi-Dehkordi, Eleni Nasiopoulos, Mahsa T. Pourazad, and Panos Nasiopoulos, "Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video," J. Electron. Imaging 25(1), 013008 (Jan 14, 2016). http://dx.doi.org/10.1117/1.JEI.25.1.013008

