Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in intra-mode; and ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and to improve the robustness of key frames. The Low-Resolution (LR) coder works independently of the HR coder, but each can also help the other recover from losses. Like many UMD coders, ours is drift-free, disruption-tolerant, and able to make good use of the asymmetric bandwidths available on multiple paths. Simulation results under different conditions show that the proposed UMD coder achieves the highest decoded quality and the lowest probability of pause compared with competing UMDC techniques, and that it offers comparable decoded quality with lower startup delay and a lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we further propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses, and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level LR-description reconstruction service at selected intermediate nodes of the LR multicast tree; the principle is to reconstruct disrupted LR frames from correctly received HR frames. As a result, the viewing experience at downstream nodes benefits from the reconstruction performed at upstream nodes.
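The following minimal sketch illustrates the idea of regenerating lost LR frames from correctly received HR frames at an intermediate node of the LR multicast tree. It assumes the LR description is a spatially downsampled version of the HR description; the function names, the 2x block-averaging downsample, and the frame representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reconstruct_lr_from_hr(hr_frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Approximate an LR luma frame by block-averaging the corresponding HR frame."""
    h, w = hr_frame.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of the factor
    blocks = hr_frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(hr_frame.dtype)

def patch_lr_stream(lr_frames, hr_frames):
    """Replace disrupted (None) LR frames with reconstructions from received HR frames."""
    return [lr if lr is not None else
            (reconstruct_lr_from_hr(hr) if hr is not None else None)
            for lr, hr in zip(lr_frames, hr_frames)]
```

Downstream nodes of the LR tree then receive the patched stream, so a loss that occurred upstream does not have to be concealed again at every descendant.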
H.264/MPEG-4 AVC video coding achieves high coding performance through complex inter and intra prediction algorithms. Many fast intra prediction algorithms have been proposed to reduce the number of candidate modes by using hierarchical directional selection or spatial and temporal correlations. If all-zero blocks can be detected before the DCT and quantization, these steps can be skipped for such blocks, reducing the computational complexity further. Early all-zero block detection algorithms for H.264 have been developed for inter motion search, but they cannot be applied to intra prediction directly. In this paper, a novel all-zero block detection algorithm for H.264 intra prediction is proposed. In intra 4x4 modes, the SAD calculation is the same as in inter motion search, but it is much more complex in intra 16x16 modes. The Hadamard transform is used for the SAD calculation of the 16 AC-coefficient blocks, and an extra Hadamard transform is applied to the DC block derived from them; the DC and AC thresholds are derived separately from these SAD calculations. Simulation results for all-I-frame coding show that the proposed method saves up to 40% of the intra prediction computation time with nearly no PSNR loss and only a slight bitrate increase. Furthermore, the performance loss is even smaller for combined I-frame and P-frame coding with the same intra prediction time saving. The proposed algorithm does not conflict with other fast intra prediction algorithms and can be combined with any of them to achieve additional computational savings.
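As a rough illustration of the intra 16x16 case, the sketch below applies a 4x4 Hadamard transform to each 4x4 residual sub-block, tests the AC energy against a threshold, and applies an extra Hadamard transform to the 4x4 block of collected DC terms. The thresholds t_ac and t_dc are placeholders for the quantization-step-dependent thresholds derived in the paper; the exact expressions are not reproduced here.

```python
import numpy as np

# 4x4 Hadamard matrix used by H.264 for SATD-style cost measures.
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def intra16_is_all_zero(residual16: np.ndarray, t_ac: float, t_dc: float) -> bool:
    """Return True if a 16x16 intra prediction residual is predicted to quantize
    to all zeros, so its DCT and quantization can be skipped."""
    dc_block = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            sub = residual16[4*i:4*i+4, 4*j:4*j+4].astype(float)
            coeffs = H4 @ sub @ H4.T                      # Hadamard transform of the 4x4 block
            dc_block[i, j] = coeffs[0, 0]                 # collect the DC term
            if np.abs(coeffs).sum() - abs(coeffs[0, 0]) >= t_ac:
                return False                              # AC energy too large in this block
    # Extra Hadamard transform applied to the 4x4 block of DC terms.
    return np.abs(H4 @ dc_block @ H4.T).sum() < t_dc
```

When the test succeeds, the encoder can mark the macroblock's coefficients as zero directly instead of running the forward transform and quantization.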
The Workshop of Metasynthetic Engineering is a human-computer system for analyzing strategies and methods for the collective planning and implementation of complex, large-scale systems. A virtual conferencing space (VCS) can provide a seamless virtual collaboration environment, including eye contact and gaze awareness, which matches the needs of the user-level application environment of the Workshop of Metasynthetic Engineering. Based on the virtual conferencing space, we implement a collaborative environment that supports group cooperation and integrated seminars. In this paper, we introduce the concept of the virtual conferencing space, discuss the model and implementation of the VCS, and then describe how CSCW is supported in the VCS-based Workshop of Metasynthetic Engineering.
Virtual reality systems construct virtual environments that provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach that uses real-world or synthesized images to compose a virtual environment. The images can be recorded by a camera or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, stepping left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in its support of forward/backward walking.
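The abstract does not specify the warping functions HVS uses; the sketch below is only a crude stand-in that approximates a small step forward by cropping toward the image centre and resampling back to full resolution. The function name, the step parameter, and the nearest-neighbour resampling are illustrative assumptions.

```python
import numpy as np

def step_forward(image: np.ndarray, step: float = 0.05) -> np.ndarray:
    """Simulate a small forward step by zooming into the centre of the image."""
    h, w = image.shape[:2]
    mh, mw = int(h * step / 2), int(w * step / 2)
    crop = image[mh:h - mh, mw:w - mw]
    # Nearest-neighbour resample back to the original resolution.
    rows = (np.arange(h) * crop.shape[0] / h).astype(int)
    cols = (np.arange(w) * crop.shape[1] / w).astype(int)
    return crop[rows][:, cols]
```

A backward step could be approximated analogously by shrinking the image and padding its border, while 360-degree look-around would index into a panoramic source image.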