Paper
3 February 2014
Incorporating visual attention models into video quality metrics
Welington Y. L. Akamine, Mylène C. Q. Farias
Proceedings Volume 9016, Image Quality and System Performance XI; 90160O (2014) https://doi.org/10.1117/12.2039780
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Abstract
A recent development in the area of image and video quality consists of incorporating aspects of visual attention into the design of visual quality metrics, mostly under the assumption that distortions appearing in less salient areas are less visible and, therefore, less annoying. This research area is still in its infancy, and results obtained by different groups are not yet conclusive. Among the works that report some improvement, most use subjective saliency maps, i.e. saliency maps generated from eye-tracking data obtained experimentally. Moreover, most works address the image quality problem and do not focus on how to incorporate visual attention into metrics for video signals. In this work, we investigate the benefits of incorporating saliency maps obtained with visual attention models. In particular, we compare the performance of four full-reference video quality metrics with their modified versions, which incorporate saliency maps into the algorithm. For comparison purposes, we use a database of subjective saliency maps.
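The abstract does not specify how the saliency maps are incorporated into the metrics. A common baseline in this line of work, sketched here purely as an assumption, is saliency-weighted pooling: each pixel's distortion is weighted by the (normalized) saliency value at that location before being pooled into a frame score. The function below illustrates this for a PSNR-style metric; the name and interface are hypothetical, not taken from the paper.

```python
import numpy as np

def saliency_weighted_psnr(ref, dist, saliency, max_val=255.0):
    """Illustrative saliency-weighted PSNR for a single frame (assumed scheme,
    not the paper's exact method).

    ref, dist : 2-D grayscale frames of the same shape.
    saliency  : 2-D non-negative saliency map, same shape as the frames.
    """
    # Normalize the saliency map so the weights sum to 1 over the frame.
    w = saliency / saliency.sum()
    # Weighted mean squared error: errors in salient regions count more.
    wmse = np.sum(w * (ref.astype(float) - dist.astype(float)) ** 2)
    # Standard PSNR formula applied to the weighted MSE.
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

Under this weighting, the same magnitude of distortion yields a lower (worse) score when it falls in a highly salient region than when it falls in a region viewers rarely fixate, which is the behavior the saliency-based modifications described above aim for. Per-frame scores would then be pooled over time (e.g. averaged) to obtain a video-level score.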
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Welington Y. L. Akamine and Mylène C. Q. Farias "Incorporating visual attention models into video quality metrics", Proc. SPIE 9016, Image Quality and System Performance XI, 90160O (3 February 2014); https://doi.org/10.1117/12.2039780
KEYWORDS: Video, Visualization, Image quality, Visual process modeling, Databases, Feature extraction, Motion models