Internet Video Coding (IVC) has been developed in MPEG by combining well-known existing technology elements with new coding tools covered by royalty-free declarations. In June 2015, the IVC project was approved as ISO/IEC 14496-33 (MPEG-4 Internet Video Coding). This standard is expected to be highly beneficial for video services in the Internet domain. This paper evaluates the objective and subjective performance of IVC by comparing it against Web Video Coding (WVC), Video Coding for Browsers (VCB), and the AVC High Profile. Experimental results show that IVC's compression performance is approximately equal to that of the AVC High Profile for typical operational settings, both for streaming and low-delay applications, and is better than that of WVC and VCB.
Stereo matching is a fundamental topic in computer vision. A stereo matching pipeline is typically composed of four stages: cost computation, cost aggregation, disparity optimization, and disparity refinement. In this paper, we propose a novel stereo matching method with space-constrained cost aggregation and segmentation-based disparity refinement. State-of-the-art methods are used for the cost aggregation and disparity optimization stages. This paper makes three technical contributions: first, applying space-constrained cross-regions in the cost aggregation stage; second, utilizing both color and disparity information in image segmentation; third, using image segmentation and occlusion-region detection to aid disparity refinement. Our method ranks second in the Middlebury evaluation.
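The first three of the four stages above can be sketched in a few lines. This is a minimal illustration only: it uses a plain absolute-difference cost, a square box filter for aggregation (where the paper uses space-constrained cross-regions), and winner-take-all optimization; the refinement stage is omitted.

```python
import numpy as np

def box_filter(img, r):
    # Mean over a (2r+1)x(2r+1) window using an integral image.
    h, w = img.shape
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    ii = np.zeros((h + k, w + k))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    s = ii[k:, k:] - ii[:h, k:] - ii[k:, :w] + ii[:h, :w]
    return s / k ** 2

def stereo_wta(left, right, max_disp, r=4):
    # Cost computation: per-pixel absolute luminance difference at each
    # candidate disparity.  Cost aggregation: box filter (a stand-in for
    # the paper's space-constrained cross-regions).  Disparity
    # optimization: winner-take-all over the aggregated cost volume.
    h, w = left.shape
    vols = []
    for d in range(max_disp + 1):
        c = np.full((h, w), 255.0)          # columns with no match: max cost
        c[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        vols.append(box_filter(c, r))
    return np.argmin(np.stack(vols), axis=0)
```

Winner-take-all simply picks, per pixel, the disparity with the lowest aggregated cost; real pipelines follow this with the refinement stage the abstract describes.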
In this paper, an innovative HEVC video pre-processing method is proposed. The method applies simple linear iterative clustering (SLIC), which adapts k-means clustering to group pixels into perceptually meaningful atomic regions, or superpixels. By averaging, over each superpixel, the weighted luminance differences around every pixel, a suitable Gaussian filter parameter for that superpixel is determined. Experimental results show that the bit rate can be reduced by up to 29% without loss in visual quality.
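A rough sketch of the idea, assuming a superpixel label map (e.g. from SLIC) is already available: estimate each superpixel's texture activity from local luminance differences, then map activity to a Gaussian filter strength. The linear activity-to-sigma mapping and the `scale` constant are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def _gauss_blur(img, sigma):
    # Separable Gaussian blur, zero-padded at the borders.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    conv = lambda m: np.convolve(m, k, mode="same")
    return np.apply_along_axis(conv, 1, np.apply_along_axis(conv, 0, img))

def adaptive_preprocess(luma, labels, sigma_max=2.0, scale=32.0):
    # Per-pixel activity: mean absolute luminance difference to the four
    # neighbours (np.roll wraps at the borders; a crude approximation).
    luma = luma.astype(float)
    diff = np.zeros_like(luma)
    for ax in (0, 1):
        for sh in (1, -1):
            diff += np.abs(luma - np.roll(luma, sh, axis=ax))
    diff /= 4.0
    out = luma.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        # Busier superpixels mask distortion and tolerate stronger
        # smoothing; the mapping below is an assumed linear rule.
        sigma = sigma_max * min(diff[mask].mean() / scale, 1.0)
        if sigma > 0:
            out[mask] = _gauss_blur(luma, sigma)[mask]
    return out
```

Smoothing busy superpixels before encoding removes detail the viewer barely perceives, which is what lets HEVC spend fewer bits there.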
3D technologies on the Web have been studied for many years, but they are essentially monoscopic. With stereoscopic technology gradually maturing, we are working to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that provides users with a brand-new human-computer interaction experience. In this paper, we propose a novel approach that applies stereoscopy to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation, and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy and then discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We build on the open-source WebKit project, integrating 3D display capability into the browser's rendering engine. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. The results show that elements can be manipulated in a truly 3D space.
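The geometric core of the two-view rendering above is turning an element's depth into a horizontal parallax between the left-eye and right-eye images. A minimal sketch (not WebKit code) using similar triangles; `eye_sep` and `viewer_dist` are assumed viewing parameters in the same units as `z`:

```python
def eye_offsets(z, eye_sep=6.0, viewer_dist=60.0):
    # A point at depth z (positive = towards the viewer, in front of
    # the screen plane) is at distance (viewer_dist - z) from the eyes,
    # so by similar triangles its on-screen parallax is
    # eye_sep * z / (viewer_dist - z).  Each eye's view shifts the
    # point by half that parallax, in opposite directions.
    parallax = eye_sep * z / (viewer_dist - z)
    return -parallax / 2.0, +parallax / 2.0  # (left eye, right eye)
```

An element at z = 0 gets zero parallax and appears on the screen plane, which is exactly why an unmodified 2D page degrades gracefully to a flat image, preserving backward compatibility.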
KEYWORDS: Visualization, Stereoscopic displays, 3D displays, Eye, Cameras, Algorithm development, Video, Matrices, 3D image processing, Chemical elements
Web technology provides a relatively easy way to generate content that helps us perceive the world, and as stereoscopic display technology develops, stereoscopic devices will become much more common. The combination of web technology and stereoscopic display technology will bring revolutionary visual effects. Stereoscopic 3D (S3D) web pages, in which text, images, and video may appear at different depths, can be shown on stereoscopic display devices. This paper presents an approach to rendering two-view S3D web pages containing text, images, and widgets: first, an algorithm is developed to display stereoscopic elements such as text and widgets using a 2D graphics library; second, a method is presented to render stereoscopic web pages within the current framework of the browser; third, a preliminary solution is devised for a problem that arises in this method.
Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in web pages, such as circles and rectangles. However, it can only depict 2D shapes; as a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the rapid development of stereoscopic 3D (S3D) technology, binocular 3D devices have come into wide use. Under these circumstances, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D web pages, which makes an extension of SVG necessary. In this paper, we describe how to design and implement SVG shapes in a stereoscopic 3D mode. Two attributes, representing depth and thickness, are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important process in this project, is described as well. We also discuss the modification of WebKit made to support the generation of the left view and the right view at the same time. The results show that, in contrast to the 2D shapes produced by the Google Chrome web browser, the shapes rendered by our modified browser are in S3D mode. With a sense of depth and thickness, they appear to be real 3D objects standing out from the screen rather than simple curves and lines.
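To make the depth attribute concrete, here is a hypothetical sketch of generating the two views for a single SVG `<rect>`: the same shape is emitted twice, with its `x` coordinate shifted by half the screen parallax implied by its depth. The attribute name and the pixels-per-depth-unit scale are illustrative assumptions, not the paper's actual syntax.

```python
def stereo_rect(x, y, w, h, depth, px_per_depth=0.5):
    # Half of the total parallax goes to each eye, in opposite
    # directions; depth > 0 is assumed to mean "in front of the screen".
    shift = depth * px_per_depth / 2.0
    tmpl = '<rect x="{:.1f}" y="{}" width="{}" height="{}"/>'
    left = tmpl.format(x + shift, y, w, h)   # left-eye view
    right = tmpl.format(x - shift, y, w, h)  # right-eye view
    return left, right
```

A depth of zero yields two identical rects, so a classical 2D SVG page renders unchanged on both views.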
KEYWORDS: Quantization, Video coding, Computer programming, Computer simulations, Video, Detection and tracking algorithms, Matrices, Visual information processing, Electronic imaging, Current controlled current source
High Efficiency Video Coding (HEVC) offers a significant compression performance benefit over previous standards. Thanks to its highly efficient prediction tools, blocks whose quantized transform coefficients are all zero are quite common in HEVC. The computational load of transform and quantization can be remarkably reduced if such all-zero blocks are detected before transform and quantization are performed. Based on a theoretical analysis of the integer transform and quantization process in HEVC, we propose SAD thresholds under which an all-zero block can be detected. Simulation results show that with the proposed method, nearly 37% of the transform and quantization computation time can be saved.
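The detection idea can be sketched as follows: compute the SAD of the residual block and compare it against a threshold proportional to the quantization step. The QP-to-Qstep mapping is the standard HEVC relation, but the threshold constant below is an illustrative assumption; the paper derives its constants from the HEVC integer transform itself.

```python
import numpy as np

def is_all_zero_block(residual, qp):
    # HEVC quantization step doubles every 6 QP: Qstep = 2^((QP-4)/6).
    qstep = 2 ** ((qp - 4) / 6.0)
    sad = np.abs(residual).sum()
    n = residual.shape[0]        # transform size: 4, 8, 16 or 32
    c = 1.5 * n                  # assumed size-dependent constant
    # If the SAD is below the threshold, all transform coefficients
    # would quantize to zero, so transform+quantization can be skipped.
    return sad < c * qstep
```

In an encoder loop this test runs on the motion-compensated residual before the transform stage, and a hit lets both the transform and the quantization be bypassed entirely.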
KEYWORDS: Motion estimation, Video coding, Computer programming, Internet, Distortion, Visual information processing, Electronic imaging, Current controlled current source, Image processing, Basic research
In conventional motion compensation, a prediction block for a P frame is associated with only one motion vector. Multi-hypothesis motion compensation (MHMC) has been proposed to improve the prediction performance of conventional motion compensation, but multiple motion vectors must be searched and coded. In this paper, we propose a new low-cost multi-hypothesis motion compensation (LMHMC) scheme. In LMHMC, a block can be predicted from multiple hypotheses while only one motion vector is searched and coded into the bit-stream; the other motion vectors are derived from the motion vectors of neighboring blocks, so LMHMC reduces both the encoding complexity and the bit-rate cost of MHMC. By adding LMHMC as an additional mode to the MPEG Internet Video Coding (IVC) platform, the BD-rate saving reaches up to 10%, and the average BD-rate saving is close to 5% in the Low Delay configuration. We also compare MHMC and LMHMC on the IVC platform: LMHMC improves on MHMC by about 2% on average.
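A minimal sketch of the LMHMC prediction step under simplifying assumptions: one hypothesis uses the searched and coded motion vector, the second uses a vector derived from neighboring blocks (here a component-wise median, a common predictor choice), and the two predictions are averaged. Integer-pel fetches and equal weights only; the real codec uses sub-pel interpolation.

```python
import numpy as np

def lmhmc_predict(ref, x, y, bs, mv, neighbor_mvs):
    # mv is the single motion vector that is searched and written to
    # the bit-stream; the second hypothesis costs no extra bits because
    # its vector is derived from the neighbours on both encoder and
    # decoder sides.
    def fetch(v):
        dx, dy = v
        return ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(float)

    derived = tuple(int(np.median([m[i] for m in neighbor_mvs]))
                    for i in range(2))
    return (fetch(mv) + fetch(derived)) / 2.0
```

Averaging two hypotheses acts as a noise-suppressing filter on the prediction, which is where the BD-rate gain comes from, while the derived vector keeps the search and signaling cost at the single-vector level.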