KEYWORDS: Computer programming, Data compression, Switching, Video coding, Video, Video compression, Quantization, Forward error correction, Hyperspectral imaging, Error control coding
Some emerging applications may require flexible playback features for time-based media, such as video, that cannot be directly supported by current compression standards, because these allow frames to be decoded only in a predetermined order. An example would be a video application where both backward and forward frame-by-frame playback must be supported. A standard codec could support this by decoding complete GOPs in the desired order, and then playing back one frame at a time. Thus, potentially significant added delay and memory are needed to support backward playback; this overhead can be reduced by choosing small GOP sizes, at the cost of lower coding efficiency. Other example applications where flexible playback may be desirable include switching between different views in multiview video coding, and accessing individual spectral bands in hyperspectral imagery. In this work we address flexible playback by showing that it becomes feasible when a particular data unit (e.g., a video frame) can be decoded using information from any one of a number of other data units (e.g., in the video case, either the next frame or the previous frame). Note that this differs from structures such as bi-directionally predicted frames, which require both predictor frames to be available at the decoder. We cast this problem as one of source coding with uncertainty about decoder side-information and propose a solution based on distributed source coding. In addition, we propose macroblock-based mode switching algorithms in the context of distributed video coding to improve coding efficiency. Our results show that, using forward/backward playback as an example, our proposed solution can achieve good coding efficiency without incurring additional delay and memory overhead.
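The core idea of coding with uncertain decoder side-information can be illustrated with a minimal coset-coding sketch (the parameter choices here are hypothetical, not the paper's actual scheme): the encoder sends only a sample's coset index, and any sufficiently correlated predictor, whether it comes from the previous or the next frame, recovers the sample exactly.

```python
# Minimal sketch of coset-based distributed coding. A sample x is encoded
# as its coset index x mod M. Any side information y with |x - y| < M/2
# (e.g., the co-located pixel in EITHER the previous or the next frame)
# recovers x exactly, so the encoder need not know which one is available.

def encode(x, M):
    return x % M

def decode(coset, y, M):
    # pick the member of the coset closest to the side information y
    base = y - (y % M) + coset
    candidates = [base - M, base, base + M]
    return min(candidates, key=lambda c: abs(c - y))

M = 16          # coset spacing, chosen to exceed twice the max prediction error
x = 130         # pixel value to transmit
for y in (125, 134):   # previous-frame or next-frame predictor
    assert decode(encode(x, M), y, M) == x
```

The choice of M trades rate against robustness: a larger M tolerates weaker correlation between the frame and its predictors, but the coset index costs more bits.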
KEYWORDS: Distortion, Video, Electroluminescence, Data modeling, Computer programming, Receivers, Signal to noise ratio, Video coding, Switches, Algorithm development
We present a general rate-distortion based scheduling framework that can accommodate cases where multiple encoded versions of the same video are available for transmission. Previous work on video scheduling has mostly focused on encoding techniques, such as layered coding, that generate a single set of dependent packets. However, it is sometimes preferable to have a codec that produces redundant video data, where multiple different decoding paths are possible. Examples of such scenarios are multiple description layered coding and multiple independently encoded video streams. A new source model, the Directed Acyclic HyperGraph (DAHG), is introduced to describe the relationships between video data units when multiple decoding paths exist. Based on this model, we propose two low-complexity scheduling algorithms: the greedy algorithm and the M-T algorithm. Experiments compare the performance of these algorithms and show that, in the case of multiple decoding paths, the M-T algorithm outperforms the greedy algorithm by taking into account some of the transmission possibilities available in the near future before making a decision.
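To make the greedy baseline concrete, here is a hedged sketch (the data, unit names, and tuple layout are hypothetical, not taken from the paper): each unit becomes decodable once any one of its alternative parent sets has been sent, which is how multiple decoding paths enter a hypergraph-style model, and the scheduler greedily maximizes distortion reduction per transmitted bit.

```python
# Hedged sketch of greedy rate-distortion packet scheduling over a
# dependency hypergraph. A unit is decodable once ANY one of its
# alternative parent sets has been fully sent (OR semantics), which
# models multiple decoding paths.

def greedy_schedule(units, slots):
    """units: {name: (distortion_gain, size_bits, parent_options)}
    parent_options is a list of alternative parent sets."""
    sent, order = set(), []
    for _ in range(slots):
        ready = [u for u in units if u not in sent and
                 any(set(ps) <= sent for ps in units[u][2])]
        if not ready:
            break
        # greedy criterion: distortion reduction per transmitted bit
        best = max(ready, key=lambda u: units[u][0] / units[u][1])
        sent.add(best)
        order.append(best)
    return order

units = {
    "F1a": (40.0, 1000, [[]]),                # description a of frame 1
    "F1b": (38.0, 1100, [[]]),                # redundant description b
    "F2":  (20.0,  500, [["F1a"], ["F1b"]]),  # decodable via either one
}
```

With two transmission slots, this sketch sends "F1a" first and then "F2" (which became decodable through "F1a"), skipping the redundant "F1b". A look-ahead scheduler such as the paper's M-T algorithm can improve on this purely myopic choice.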
KEYWORDS: Distortion, Video, Electroluminescence, Data modeling, Telecommunications, Receivers, Video coding, Computer programming, Scalable video coding, Internet
Layered coding (LC) and multiple description coding (MDC) have been proposed as two different kinds of 'quality adaptation' schemes for video delivery over the current Internet or wireless networks. To combine the advantages of LC and MDC, we present a new approach, Multiple Description Layered Coding (MDLC), to provide reliable video communication over a wider range of network scenarios and application requirements. MDLC improves on LC by introducing redundancy in each layer, so that the chance of receiving at least one description of the base layer is greatly increased. While LC and MDC each perform well in limiting cases (e.g., long end-to-end delay for LC vs. short delay for MDC), the proposed MDLC system can address intermediate cases as well. Like an LC system with retransmission, the MDLC system can use a feedback channel to indicate which descriptions have been correctly received. Thus a low-redundancy MDLC system can be implemented with our proposed runtime packet scheduling system based on the feedback information. The goal of our scheduling algorithm is to find a proper on-line packet scheduling policy that maximizes playback quality at the decoder. Previous work on scheduling algorithms has not considered the multiple decoding choices created by redundancy between data units, because of the increase in complexity involved in considering alternate decoding paths. In this paper, we introduce a new model, the Directed Acyclic HyperGraph (DAHG), to represent the data dependencies among frames and layers, as well as the data correlation between descriptions. The impact of each data unit on others is represented by messages passed along the graph, with updates based on newly received information. Experimental results show that the proposed system provides more robust and efficient video communication for real-time applications over lossy packet networks.
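The feedback-driven, low-redundancy behavior described above can be sketched as follows (unit names and data structures here are hypothetical illustrations, not the paper's implementation): descriptions of the same layer are redundant siblings, and once any one of them is acknowledged, the others are skipped rather than retransmitted.

```python
# Hedged sketch of feedback-aware MDLC scheduling. Descriptions carrying
# the same layer are redundant siblings; once any one of them has been
# acknowledged via the feedback channel, the rest are skipped, which
# keeps the effective redundancy low.

def next_to_send(queue, acked, siblings):
    """queue: candidate units in priority order (base layer first)
    siblings[u]: set of descriptions (including u) carrying u's layer."""
    for unit in queue:
        if not (acked & siblings[unit]):   # layer not yet delivered
            return unit
    return None

# base layer has two descriptions B1/B2; enhancement layer has one, E1
siblings = {"B1": {"B1", "B2"}, "B2": {"B1", "B2"}, "E1": {"E1"}}
queue = ["B1", "B2", "E1"]
```

With no feedback yet, the scheduler sends "B1"; once "B1" is acknowledged, it skips the redundant "B2" and moves straight to "E1".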