Regular Articles

Adaptive search range adjustment and multiframe selection algorithm for motion estimation in H.264/AVC

Author Affiliations
Yingzhe Liu

Microelectronics Center, Harbin Institute of Technology, Harbin 150001, China

Jinxiang Wang

Microelectronics Center, Harbin Institute of Technology, Harbin 150001, China

Fangfa Fu

Microelectronics Center, Harbin Institute of Technology, Harbin 150001, China

J. Electron. Imaging. 22(2), 023031 (Jun 21, 2013). doi:10.1117/1.JEI.22.2.023031
History: Received February 6, 2013; Revised May 4, 2013; Accepted May 23, 2013

Open Access

Abstract.  The H.264/AVC video standard adopts a fixed search range (SR) and fixed reference frame (RF) for motion estimation. These fixed settings result in a heavy computational load in the video encoder. We propose a dynamic SR and multiframe selection algorithm to improve the computational efficiency of motion estimation. By exploiting the relationship between the predicted motion vector and the SR size, we develop an adaptive SR adjustment algorithm. We also design a RF selection scheme based on the correlation between the different block sizes of the macroblock. Experimental results show that our algorithm can significantly reduce the computational complexity of motion estimation compared with the JM15.1 reference software, with a negligible decrease in peak signal-to-noise ratio and a slight increase in bit rate. Our algorithm also outperforms existing methods in terms of its low complexity and high coding quality.


Motion estimation (ME) is used in many video coding standards, such as MPEG-1, MPEG-4, and H.264/AVC, to remove interframe redundancies. There are two main factors that determine the performance of ME. One is the size of the search range (SR), and the other is the number of reference frames (RFs). The values of these two factors are set manually in the configuration file of the encoder and are used as fixed parameters in the whole ME process. Although larger SR and RF values enable better coding performance, they also impose a heavy encoding complexity. In the overall coding system, about 50% of the computation time and 90% of the memory access are dedicated to ME.1,2 Indeed, the memory access bandwidth required to read reference pixels from external memory has gradually become the bottleneck of real-time applications.

Because video content differs considerably from sequence to sequence, fixed SR and RF settings are not suitable for all situations. For example, if the video sequence is motionless, a small SR and a single RF are sufficient to determine the best motion vector (MV). On the other hand, if the sequence contains a high degree of movement, we can adaptively adjust these settings to achieve better video quality. Consequently, dynamic SR adjustment and multiframe selection algorithms have become important for improving coding efficiency and satisfying the needs of real-time applications.

Many algorithms that adaptively adjust the SR size and select the RFs have been proposed. For instance, the influence of the SR on the compression rate and coding quality has been examined in an extensive series of experiments.3 The maximum-likelihood estimator has been used to model the probability density function of the motion vector difference (MVD), with the optimal SR then calculated for a given hitting probability.4,5 However, the integral operation and likelihood estimator make the algorithm somewhat complicated. In 6, images are divided into motionless and moving regions, and different SR selection methods are given for each. Using the spatial correlation, the SR can be set as a function of the maximum of the neighboring matching errors.7 Similarly, the block can be classified as motionless, slow, or violent based on the magnitude of the predicted motion vector (PMV), allowing a 3×3, 7×7, or 9×9 search area to be selected accordingly.8,9 In 10, three different SRs are selected based on certain thresholds. However, with only a few fixed SRs and thresholds, the adaptability of these algorithms8–10 is insufficient to cope with the diversity of video sequences. In 11, the correlation between different block partitions is used to predict the SR size. The SR of the current block has been calculated based on the relationship between the probability density function of the MVD and the MV distribution of the neighboring blocks.12 An analysis of the relationship between the compression ratio and the size of the SR has been shown to allow a smaller SR for macroblocks (MBs) with smaller compression ratios.13 In 14, a linear SR model is defined based on the 1-bit image plane.

Existing multiframe selection algorithms are generally classified into three categories. In the first, the MV composition technique reduces the calculation load of multiframe ME by using the MVs from the previous RFs to generate those for the current RF. The activity dominant vector selection criterion can be used to obtain the composited MV.15 The MVs of two successive frames can synthesize the MV of the current RF,16 with only a small range around the composited MV needed for ME. In 17, the MVs with the same characteristics are first merged and then the two most relevant MVs are selected to generate the MV of the current block. To apply the MV composition algorithm, the MVs of different block types must be stored for further calculation. Therefore, this approach inevitably requires a large amount of additional memory.

In the second category, the candidate reference frames (CRFs) are predicted from the temporal and spatial neighbors. The priorities of all RFs can be computed from the spatial and temporal correlation of the RF index and MVs,18 or the coding modes and RFs from some reference region may be used to determine the CRFs for the current block.19 In 20, the initial RF is dynamically selected using the mean of the RF indexes of spatially neighboring MBs.

The third category speeds up the multiframe selection procedure by employing an early termination strategy. For instance, based on the spatial and temporal consistencies of the motion field, multiframe ME may only be selected for a small region of the MBs,21 or the experimentally obtained sum of absolute difference (SAD) curve could be used to skip the third and fourth RFs.22 In 23, the relationship between the SAD and the all-zero quantized transform coefficients was deduced. If the SAD is less than some threshold, the multiframe ME procedure should be terminated immediately. Alternatively, the MVs and SADs from previous RFs can be used to automatically confirm whether to search the last two RFs.24

In contrast to the above methods, we propose a dynamic SR and multiframe selection algorithm for the H.264/AVC ME process. For the SR adjustment scheme, we first analyze the relationship between the PMV and the SR and then use this relationship to adjust the SR size. For the multiframe selection scheme, we focus on the correlation between different block sizes of the MB to adaptively select RFs. Simulation results show that the proposed algorithm achieves a higher speedup while maintaining almost the same video quality and total bit rate.

The rest of this paper is organized as follows. Sections 2 and 3 describe the algorithms for dynamic SR adjustment and multiframe selection, respectively. Our simulation results are presented in Sec. 4, and we give our conclusions in Sec. 5.

Motivation and Analysis

The SR size can be successfully reduced using the algorithms mentioned in Sec. 1, but their efficiency can be further improved. In the H.264/AVC coding standard, the PMV determines the center of the search area. Therefore, the SR size depends on the location of the PMV more than on any other factor, such as the maximum matching error7 or the motion activity.8 The relationship between the SR and the PMV is illustrated in Fig. 1. The vector mv0 denotes the search center and is calculated as the median of the neighboring MVs, and mvn is the final MV of block N obtained by ME. The vector mvdn is the difference between mv0 and mvn. Intuitively, when the PMV (mv0) is accurate (i.e., when mv0 is close to mvn), fewer regions need to be searched before the final motion vector (mvn) is obtained. Hence, we consider an adaptive SR adjustment algorithm that depends on the PMV.

Fig. 1: Relationship between the search range (SR) and PMV.

To determine an MV within the search window, the SAD is widely used to measure the prediction distortion between the current and candidate blocks. For a block of size W×H, the SAD is defined as

SAD[c, r(mv)] = Σ (i = m to m+W) Σ (j = n to n+H) |curr(i, j) − ref(i + mvx, j + mvy)|,  (1)
where curr(i, j) denotes the pixel value of the current block and ref(i + mvx, j + mvy) is the pixel value of the candidate block for a given motion vector mv = (mvx, mvy). In our scheme, we define the Dif metric to evaluate the accuracy of the PMV. In other words, we use Dif to measure the distance between the MV and its predictor. It is defined as follows:
Dif = |SADI[c, r(mv0)] − SADI[c, r(mvi)]|,  (2)
where SADI[c,r(mv0)] and SADI[c,r(mvi)] are the SADs computed at the PMV (mv0) and the MV (mvi) of block I, respectively. A small Dif value implies an accurate predictor, so we may expect a high probability of a small SR. To reveal the relationship between Dif and the SR size, the experimental results from a variety of video sequences are shown in Fig. 2. In this figure, |mvdn| represents the magnitude of the difference between the MV and the PMV, and average_Dif is the average Dif value of blocks with the same |mvdn|. A clear positive correlation can be seen, especially for |mvdn| values up to 11. Although average_Dif does not increase monotonically over the whole |mvdn| range, there is a high probability that average_Dif becomes larger as |mvdn| increases. Hence, we can develop an adaptive SR selection algorithm that uses Dif as the criterion for determining the SR: a small Dif implies an accurate PMV and thus a small SR, whereas a large Dif implies a large SR.
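As an illustrative sketch, Eqs. (1) and (2) can be written in Python (the function names and the nested-list pixel representation are ours; the loops cover the W×H pixels of the block):

```python
def sad(curr, ref, m, n, W, H, mvx, mvy):
    """Eq. (1): sum of absolute differences between the W x H current
    block at (m, n) and the candidate block displaced by (mvx, mvy)."""
    total = 0
    for i in range(m, m + W):
        for j in range(n, n + H):
            total += abs(curr[i][j] - ref[i + mvx][j + mvy])
    return total

def dif(sad_at_pmv, sad_at_mv):
    """Eq. (2): absolute difference between the SADs computed at the
    predicted MV (mv0) and at the MV of the block."""
    return abs(sad_at_pmv - sad_at_mv)
```

A small return value of dif signals an accurate predictor and hence a candidate for a smaller SR.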

Fig. 2: Relationship between Dif and |mvdn|. (Test conditions: encoded frames = 100; number of reference frames (RFs) = 1; quantization parameter (QP) = 28; SR = 16; IPP group structure.)

For the current block N in Fig. 1, mvn is not available until the ME process has been performed across the whole search area, so we must set a predictor for SADN[c,r(mvn)] before it can be used. The influence of this predictor on coding quality and efficiency can be briefly analyzed as follows. In Eq. (2), if a larger predictor is selected for SADN[c,r(mvn)], a smaller Dif value will be obtained. According to Fig. 2, this should result in a smaller SR size. Although a higher coding speed can be achieved with a smaller SR, coding quality may be lost if the true MV falls outside this smaller SR. In contrast, a smaller predictor gives a larger SR and higher coding quality. Hence, our purpose is not to obtain an accurate estimate of SADN[c,r(mvn)] but to achieve a tradeoff between coding quality and coding efficiency. Therefore, we replace SADN[c,r(mvn)] with a smaller predictor:

SADN[c, r(mvn)] = min{SADA[c, r(mva)], SADB[c, r(mvb)], SADC[c, r(mvc)], SADE[c, r(mve)]},  (3)
where SADA[c,r(mva)], SADB[c,r(mvb)], SADC[c,r(mvc)], and SADE[c,r(mve)] are the SADs computed from the MVs of the spatially neighboring blocks (A is the left block, B is the upper block, C is the upper-right block) and the colocated block (E).

To estimate the SR size of block N, let DifN, DifA, DifB, DifC, and DifE denote the Dif values of the current block N and its four neighbors (left, upper, upper-right, and colocated block), respectively. Each of DifA, DifB, DifC, and DifE is subtracted from DifN, and the four differences are collected in the set ΔDif:

ΔDif = {DifN − Difi},  i = A, B, C, and E.  (4)

The SR is then adjusted according to the following conditions.

Condition 1: If any element in ΔDif is smaller than 0, DifN is smaller than the Dif value of at least one of its four neighbors. In this case, the PMV of block N is closer to its MV than the neighbors' PMVs are to theirs, so the following smaller SR is set for block N:

SR = max[1, min(SRA, SRB, SRC, SRE)].  (5)

In Eq. (5), SRA, SRB, SRC, and SRE are the SRs of blocks A, B, C, and E, respectively.

Condition 2: If all elements in ΔDif are greater than 0 and DifN > α × max(DifA, DifB, DifC, DifE), where α is an adjustment parameter, the PMV of the current block N is far from its MV, which is likely due to complex motion. In such cases, to guarantee coding quality, a larger SR is set as follows:

SR = min[R, max(SRA, SRB, SRC, SRE) + R/8],  (6)
where R is a fixed SR given in the configuration file of the encoder. Based on extensive experiments, we set the value of α to 2.

Condition 3: If conditions 1 and 2 are not satisfied, the two minimum elements in ΔDif (corresponding to the blocks that have the closest Dif values to the current block N) are selected. These blocks are marked b1 and b2, and the SR of block N is set according to

SR = min[R, max(SRb1, SRb2) + 1],  b1, b2 ∈ {A, B, C, E}.  (7)

After determining the new SR according to Eqs. (5)–(7), the ME is performed within this new SR.
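Conditions 1 to 3 can be sketched as follows (an illustrative Python sketch under stated assumptions: the dictionary interface and names are ours, α = 2 as in the text, and integer division stands in for R/8):

```python
def adjust_sr(dif_n, neigh_dif, neigh_sr, R, alpha=2):
    """Select the SR of the current block N from the Dif and SR values of
    its neighbors A, B, C, E, following conditions 1-3 / Eqs. (5)-(7)."""
    delta = {k: dif_n - d for k, d in neigh_dif.items()}  # Eq. (4)
    srs = neigh_sr.values()
    # Condition 1: the PMV of N is more accurate than that of a neighbor.
    if min(delta.values()) < 0:
        return max(1, min(srs))                              # Eq. (5)
    # Condition 2: the PMV is far from the MV (complex motion).
    if dif_n > alpha * max(neigh_dif.values()):
        return min(R, max(srs) + R // 8)                     # Eq. (6)
    # Condition 3: grow slightly from the two closest neighbors b1, b2.
    b1, b2 = sorted(delta, key=delta.get)[:2]
    return min(R, max(neigh_sr[b1], neigh_sr[b2]) + 1)       # Eq. (7)
```

For example, with R = 16 and neighbor SRs of 4 to 10, a very small DifN collapses the SR toward the smallest neighbor SR, while a DifN above twice the largest neighbor Dif expands it by R/8.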

Proposed Algorithm

For clarity and completeness, we summarize the detailed description of the proposed algorithm as follows:

DifN is calculated for current block N according to Eqs. (2) and (3), and ΔDif is obtained according to Eq. (4). The following conditions are then used to adjust the size of SR.

  • If condition 1 [min(ΔDif) < 0] is satisfied, SR is set according to Eq. (5).
  • If condition 2 [min(ΔDif) > 0 and DifN > α × max(DifA, DifB, DifC, DifE)] is satisfied, SR is set according to Eq. (6).
  • If neither condition 1 nor condition 2 is satisfied, SR is set according to Eq. (7).

After finding the MV within the new SR defined by our algorithm, the Dif and SR values of the block N are saved for the next coding block.

In order to verify the correctness of the proposed algorithm, we analyzed the hit ratio to see whether our new SR included the result of ME. The hit ratios for different quantization parameters (QPs) are shown in Table 1. It can be seen that our algorithm gives a high accuracy of 93.28% to 99.61%. The Bus, Mobile, and Stefan sequences contain a high level of motion and thus obtain relatively low hit ratios. For the Akiyo, Paris, and Flower sequences, which contain slower motion, the proposed algorithm yields a hit ratio of around 99%.

Table 1: Hit ratio of the motion vector (MV) within the new search range (SR) (test conditions: encoded frames = 100; number of RFs = 1; QP = 28, 32, 36, 40; SR = 16; IPP group structure).
Motivation and Analysis

Figure 3 describes the relationship between multiframe and variable block size ME. As shown in Fig. 3, the multiframe ME process is performed on each block size of the MB. These blocks are different partitions of the same MB, so their best RFs are highly correlated. According to this feature, the best RFs of the upper divisions can be used as CRFs for the lower divisions. However, using this top-down method to predict the CRFs of the lower divisions has two shortcomings. First, if the MB contains an object boundary or complex content, the probability that the lower and upper divisions share the same best RF is small. Second, the lower partitions have more sub-blocks than the upper partitions, so this top-down prediction method is not very efficient. For example, the P8×8 division contains 36 sub-blocks in total. If the number of CRFs is 2, then the 36 sub-blocks will require 72 ME procedures, whereas for the 16×16 division, only two ME procedures are required. Therefore, reducing the number of CRFs in the lower divisions is critical for improving the efficiency of ME.

Fig. 3: Multiple-RF motion estimation (ME) with variable block sizes.

Accordingly, the 8×8 block is used as the basic unit in this study to determine the best RFs for the other coding modes. We use the 8×8 block because it better reflects the motion and texture characteristics of the MB while improving efficiency as much as possible. For convenience, the RFs are hereafter denoted ref0, ref1, ref2, ... in accordance with their distance from the current frame.

Due to the temporal consistency of a video sequence, it is highly probable that blocks in static or slow-moving regions will select ref0 as the optimal RF. In this work, the movement complexity (MC) of the MB is defined as

MC = max(|MV0|, |MV1|, |MV2|, |MV3|),  (8)
where MV0, MV1, MV2, and MV3 denote the MVs of four 8×8 blocks at ref0.

To inspect the relationship between movement complexity and ref0, various video sequences with different motion activities are used as the test material. Table 2 documents the average experimental results for different QPs. T_B is the number of blocks with the same movement complexity, and RF_ratio represents the percentage of blocks whose ultimate RF is ref0. For static blocks (for which MC equals 0), an average of 97.87% select ref0 as the best RF. For the News, Container, and Akiyo sequences, the percentage of blocks choosing ref0 among T_B is greater than 99%. Table 2 also shows that RF_ratio decreases as the value of MC increases. For the Stefan sequence, in particular, when MC is 3, only 68.32% of T_B select ref0 as their ultimate RF. Therefore, we can conclude that when the value of MC is less than some threshold Th (set to 2 in this paper), there is a very high probability that ref0 is the best RF, and the search across the other RFs can be skipped.
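Eq. (8) and the resulting early-skip decision can be sketched as follows (the paper does not state which norm |MV| uses; this sketch assumes the Euclidean magnitude, and the function names are ours):

```python
import math

def movement_complexity(mvs):
    """Eq. (8): MC is the largest magnitude among the MVs of the four
    8x8 blocks found on ref0."""
    return max(math.hypot(mvx, mvy) for mvx, mvy in mvs)

def only_ref0_needed(mvs, th=2):
    """Skip the remaining reference frames when the MB is static or
    slow-moving (MC below the threshold Th)."""
    return movement_complexity(mvs) < th
```
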

Table 2: Relationship between the movement complexity and ref0 (test conditions: encoded frames = 60; number of RFs = 5; QP = 28, 32, 36, 40; SR = 16; IPP group structure).

If MC is larger than Th, it indicates that the motion is complex. Thus, we perform ME on the remaining RFs for each 8×8 block. The reference maps are then generated according to the correlation between the 8×8 block and the other blocks. Figure 4 shows the reference maps for other blocks, where refa, refb, refc, and refd denote the optimal RFs of the first, second, third, and fourth 8×8 block, respectively. The CRFs for each partition are described as follows:

  • The P16×16 mode is used to encode regions containing slow motion. Thus, if more than two RFs are contained in Fig. 4(a), the motion of the current block is complex and the P16×16 mode cannot be selected. In this case, only ref0 is selected as the best RF; otherwise all RFs contained in Fig. 4(a) are selected as CRFs for the 16×16 block.
  • For the 16×8 block, refa and refb are regarded as CRFs for the upper 16×8 division, whereas refc and refd are CRFs for the lower 16×8 division in Fig. 4(b).
  • For the 8×16 block, refa and refc are regarded as CRFs for the left 8×16 division, whereas refb and refd are CRFs for the right 8×16 division in Fig. 4(c).
  • For the three 8×8 subdivisions, the best RF for the 8×8 block is directly used as the best RF for its subdivisions in Fig. 4(d).
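The reference-map rules above can be sketched as follows (refa to refd are the best RFs of the four 8×8 blocks, as in Fig. 4; the set/dictionary layout is our own illustration):

```python
def candidate_rfs(refa, refb, refc, refd):
    """Build candidate-RF (CRF) sets for each partition from the best RFs
    of the four 8x8 blocks (reference maps of Fig. 4)."""
    all_refs = {refa, refb, refc, refd}
    return {
        # 16x16: if more than two distinct RFs appear, the motion is too
        # complex for P16x16, so only ref0 (index 0) is kept.
        '16x16': all_refs if len(all_refs) <= 2 else {0},
        '16x8_upper': {refa, refb}, '16x8_lower': {refc, refd},
        '8x16_left': {refa, refc}, '8x16_right': {refb, refd},
        # 8x4/4x8/4x4 sub-blocks inherit the best RF of their parent 8x8.
        'sub_8x8': [{refa}, {refb}, {refc}, {refd}],
    }
```

For instance, if the upper two 8×8 blocks select ref0 and the lower two select ref1, the 16×16 partition searches both, the upper 16×8 division searches only ref0, and the lower 16×8 division searches only ref1.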

Fig. 4: Reference maps of different partitions.

Proposed Algorithm

Based on the above description, the pseudocode for our multiframe selection algorithm can be written as shown in Fig. 5. The initial number of CRFs is usually set to 5.

Fig. 5: Pseudocode for our algorithm.

As shown in Fig. 5, ME is first performed on ref0 for the four 8×8 blocks, and the MVs of the four blocks are used to calculate the motion complexity according to Eq. (8). If MC is less than the threshold Th (set to 2), then ref0 is set as the CRF. Otherwise, ME is conducted on ref1 through ref4 for each of the four 8×8 blocks. The next step is to determine a candidate reference frame set (CRFS) for the other partitions based on the reference maps in Fig. 4. ME is then performed on the RFs in the CRFS for the 8×8 sub-blocks, the 16×16 block, the 16×8 blocks, and the 8×16 blocks. In the ME process for H.264/AVC, the coding order is P16×16, P16×8, P8×16, and then P8×8. Because the proposed algorithm changes this order (the 8×8 blocks are processed first), the MVs of the second, third, and fourth 8×8 blocks must be repredicted after the whole ME process finishes; otherwise, there would be an inconsistency between the encoder and decoder.

Using the pseudocode above, we investigate the number of RFs under the best- and worst-case scenarios. In the best case, only one RF (ref0) is required for all block divisions. In the worst case, all five RFs are required for the 8×8 blocks, and the 16×16, 16×8, and 8×16 blocks each have two CRFs. Subpartitions such as 8×4, 4×8, and 4×4 always have one RF. Therefore, the average number of RFs required for the entire coding of the MB is [(4×5)+(1×2)+(2×2)+(2×2)+(16×1)+(8×1)+(8×1)]/41 ≈ 1.51.
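This worst-case average can be checked with a few lines of arithmetic (the per-partition counts follow the text above):

```python
# Worst case: (number of blocks, candidate RFs per block) per partition.
terms = [(4, 5),   # four 8x8 blocks, five RFs each
         (1, 2),   # one 16x16 block, two CRFs
         (2, 2),   # two 16x8 blocks, two CRFs each
         (2, 2),   # two 8x16 blocks, two CRFs each
         (8, 1),   # eight 8x4 sub-blocks, one RF each
         (8, 1),   # eight 4x8 sub-blocks, one RF each
         (16, 1)]  # sixteen 4x4 sub-blocks, one RF each
total_searches = sum(n * r for n, r in terms)  # 62 RF searches
total_blocks = sum(n for n, _ in terms)        # 41 blocks in the MB
avg_rfs = total_searches / total_blocks        # about 1.51 RFs per block
```
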

To verify the effectiveness of the above method, the average hit ratio for different QPs at each block size was compared to the full RF search method. The results are shown in Table 3. As can be seen, our algorithm achieves a high average accuracy of 91% to 98% for all block partitions. For the 16×16 block size, the hit ratio averages above 95%. Even the 8×8 sub-block averages over 91%, with a worst-case performance of 84.75% with the Mobile sequence.

Table 3: Average hit ratios of the proposed algorithm for each block size (test conditions are the same as Table 2).

To evaluate the performance of the proposed approach, it was implemented on the H.264/AVC reference software JM15.1. Various standard sequences were used, including CIF, 4CIF, and HD sizes, at 30 fps for 100 frames. The simulations were performed with rate distortion (RD) optimization and Hadamard transform enabled, QP=28, 32, 36, and 40, IPPP sequence types, CAVLC entropy coding, and one or five RFs. The default SR was set to 16 for CIF, 32 for 4CIF, and 64 for HD sequences.

For a numerical comparison, the Bjontegaard delta peak signal-to-noise ratio (BDPSNR) and Bjontegaard delta bit rate (BDBR) were used to evaluate coding quality. These quantities are recommended for comparing the difference between two RD curves.25 A negative BDPSNR (dB) or positive BDBR (%) indicates a coding loss relative to the full-search algorithm in JM15.1. To measure the complexity reduction, the average used reference frame (URF) and average computational complexity (CPX) are used. CPX (%)4 is defined as the ratio of the number of search points in the proposed method to the number in the full-search motion estimation (FME) algorithm. For example, a CPX value of 21% implies that the proposed method uses 21% of the search points of the FME algorithm. The average URF represents the average number of RFs used by each MB, with a smaller URF signifying fewer RFs.

For an objective comparison of coding performance, the proposed approach was compared with corresponding methods. Our SR adjustment method was compared with two recent schemes.4,11 Table 4 shows that the proposed algorithm reduced the number of search points to an average of 2.56% of the full search, at the cost of a 0.14% increase in bit rate and a 0.06 dB reduction in peak signal-to-noise ratio (PSNR). When the hitting probability of MVDs was set to 0.7, the method of Ko et al.4 exhibited good performance, with an average CPX of 4.94%. However, its implementation cost is much higher than that of the proposed scheme because of the overheads inherent in the integral operation and likelihood estimation. For several sequences, such as Stefan, Station2_Hd, and Soccer_4Cif, the method of 11 exhibited better coding quality than ours in terms of BDPSNR and BDBR, but our approach reduces the number of search points by a further 11% on average, with only a slight bit rate increase and a small PSNR decrease.

Table 4: Performance comparison of the proposed algorithm on JM 15.1 (with one RF).

For further comparison, Table 5 depicts the ME time of the proposed method compared to the UMHexagonS algorithm in JM15.1. The proposed method reduces the ME time by 9.15% on average, with a marginal increase in BDBR (+0.07%) and a small loss of BDPSNR (0.01 dB). For sequences with static and slow motion activity, such as Akiyo_Cif and Hall_Cif, UMHexagonS achieves a high speed-up factor because the ME process terminates after finishing the small and middle diamond pattern searches. In contrast, our algorithm works well for CIF sequences with high motion levels and for 4CIF and HD sequences with larger SR values.

Table 5: Performance comparison with the UMHexagonS algorithm (test conditions are the same as Table 4).

We compared our multiframe selection algorithm with two well-known efficient algorithms.19,23 The comparison results are listed in Table 6. For most sequences, the method of Lee et al.19 yielded a high bit rate increment and PSNR loss (average of 0.09 dB BDPSNR decrease and 0.61% BDBR increase), with 1.88 frames searched compared to the JM15.1 reference encoding. The method of 23 achieved good PSNR and bit rate performance (average 0.06 dB BDPSNR decrease and 0.45% BDBR increase) with a higher speed-up factor (1.52 RFs used). The proposed algorithm gave superior results to both of these techniques in terms of coding complexity reduction. It outperformed the method of Lee et al.19 in all aspects, with a 0.63-frame reduction in the number of searched RFs, which implies a reduction of 12% in the number of ME operations. At the same time, the proposed method improved the URF value by 0.31 frames compared to the method of Huang et al.,23 with a better bit rate and the same PSNR.

Table 6: Performance comparison of the proposed algorithm on JM 15.1 (with five RFs).

It should be mentioned that the URF and CPX parameters are virtually independent of the programming quality or hardware platform. Therefore, it is clear that the performance of the proposed methods is superior to the corresponding methods evaluated above. It must also be noted that the proposed methods can be combined with any fast ME method, such as Fast Full search, EPZS, and UMHexagonS, in the JM reference encoder.

In this paper, we presented an efficient adaptive SR adjustment and multiframe selection algorithm for ME. The proposed SR adjustment algorithm adapts the SR for ME based on the accuracy of the PMV. We also designed an RF selection scheme that reduces the complexity of multiframe ME based on the relationship between the different block sizes of the MB. The described technique significantly reduces the computational complexity of ME in H.264/AVC. Through a comparative analysis, it was found that the proposed approach gave an average RF saving of 3.75 frames (1.25 URFs) with a setting of five RFs and a reduction of 97% in the number of search points with a setting of one RF, with a negligible bit rate increment and a minimal loss of image quality. These features will be important in the development of real-time coding applications for video conferencing and telephony. In future work, we will combine our two algorithms to further speed up the ME process.

References

1. Z. He et al., "Power-rate-distortion analysis for wireless video communication under energy constraints," IEEE Trans. Circuits Syst. Video Technol. 15(5), 645–658 (2005).
2. K. Denolf et al., "Initial memory complexity analysis of the AVC codec," in IEEE Workshop on Signal Processing Systems, pp. 222–227, IEEE Signal Processing Society, San Diego (2002).
3. C. A. Gonzales, H. Yeo, and C. J. Kuo, "Requirements for motion-estimation search range in MPEG-2 coded video," IBM J. Res. Dev. 43(4), 453–470 (1999).
4. Y.-H. Ko, H.-S. Kang, and S.-W. Lee, "Adaptive search range motion estimation using neighboring motion vector differences," IEEE Trans. Consum. Electron. 57(2), 726–730 (2011).
5. H.-S. Kang, S.-W. Lee, and H. G. Hosseini, "Probability constrained search range determination for fast motion estimation," J. Electron. Telecommun. Res. Inst. 34(3), 369–378 (2012).
6. I. Kim and J. Kim, "Low-complexity block-based motion estimation algorithm using adaptive search range adjustment," Opt. Eng. 51(6), 067010 (2012).
7. S. Lee, "Fast motion estimation based on adaptive search range adjustment and matching error prediction," IET Image Process. 4(1), 1–10 (2010).
8. Y. Ismail et al., "Fast variable padding motion estimation using smart zero motion prejudgment technique for pixel and frequency domains," IEEE Trans. Circuits Syst. Video Technol. 19(5), 609–626 (2009).
9. Y. Ismail et al., "Fast motion estimation system using dynamic models for H.264/AVC video coding," IEEE Trans. Circuits Syst. Video Technol. 22(1), 28–42 (2012).
10. S.-W. Lee, S.-M. Park, and H.-S. Kang, "Fast motion estimation with adaptive search range adjustment," Opt. Eng. 46(4), 040504 (2007).
11. S. Lee, K. Choi, and E. S. Jang, "Adaptive search range adjustment scheme for fast motion estimation in AVC/H.264," Opt. Eng. 50(6), 067402 (2011).
12. C.-C. Lou, S.-W. Lee, and C.-C. J. Kuo, "Adaptive motion search range prediction for video encoding," IEEE Trans. Circuits Syst. Video Technol. 20(12), 1903–1908 (2010).
13. J. Jung, J. Kim, and C.-M. Kyung, "A dynamic search range algorithm for stabilized reduction of memory traffic in video encoder," IEEE Trans. Circuits Syst. Video Technol. 20(7), 1041–1046 (2010).
14. O. Urhan, "Constrained one-bit transform based fast block motion estimation using adaptive search range," IEEE Trans. Consum. Electron. 56(3), 1868–1871 (2010).
15. M.-J. Chen et al., "Fast multiframe motion estimation algorithms by motion vector composition for the MPEG-4/AVC/H.264 standard," IEEE Trans. Multimedia 8(3), 478–487 (2006).
16. S.-E. Kim, J.-K. Han, and J.-G. Kim, "An efficient scheme for motion estimation using multireference frames in H.264/AVC," IEEE Trans. Multimedia 8(3), 457–466 (2006).
17. T.-K. Lee, Y.-L. Chan, and C.-H. Fu, "Reliable tracking algorithm for multiple reference frame motion estimation," J. Electron. Imaging 20(3), 033003 (2011).
18. D. S. Jun and H. W. Park, "An efficient priority-based reference frame selection method for fast motion estimation in H.264/AVC," IEEE Trans. Circuits Syst. Video Technol. 20(8), 1156–1161 (2010).
19. K. Lee, G. Jeon, and J. Jeong, "Fast reference frame selection algorithm for H.264/AVC," IEEE Trans. Consum. Electron. 55(2), 773–779 (2009).
20. I. Park and D. W. Capson, "Improved motion estimation time using a combination of dynamic reference frame selection and residue-based mode decision," Signal Image Video Process. 6(1), 25–39 (2012).
21. L. Shen, Z. Liu, and Z. Zhang, "Fast multiframe selection algorithm based on spatial-temporal characteristics of motion field," J. Electron. Imaging 17(4), 043004 (2008).
22. L. Shen et al., "An adaptive and fast multiframe selection algorithm for H.264 video coding," IEEE Signal Process. Lett. 14(11), 836–839 (2007).
23. Y.-W. Huang et al., "Analysis and complexity reduction of multiple reference frames motion estimation in H.264/AVC," IEEE Trans. Circuits Syst. Video Technol. 16(4), 507–522 (2006).
24. X. Li, E. Q. Li, and Y.-K. Chen, "Fast multi-frame motion estimation algorithm with adaptive search strategies in H.264," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, pp. 369–372, IEEE Signal Processing Society, Montreal, Canada (2004).
25. K. Andersson, R. Sjöberg, and A. Norkin, "Reliability measure for BD measurements," ITU-T Q.6/SG16 VCEG 33rd Meeting, Document VCEG-AL22, London, UK (July 2009).

Grahic Jump LocationImage not available.

Yingzhe Liu received his BS and MS degrees from YanShan University, China, in 2002 and 2004, respectively. He is currently working toward a PhD degree at the Microelectronics Center of Harbin Institute of Technology, China. His current research interests include audio and video coding and digital hardware design for video compression.

Jinxiang Wang received his MS and PhD degrees in semiconductor physics and communication and information systems from Harbin Institute of Technology, China, in 1993 and 1999, respectively. He is now a professor in the School of Astronautics, Harbin Institute of Technology. His current research interests mainly include multimedia data compression, 3G wireless communication, systems on chips (SOC), and networks on chips (NOC).

Fangfa Fu received his MS and PhD degrees in semiconductor physics and communication and information systems from Harbin Institute of Technology, China, in 2006 and 2011, respectively. He is now a lecturer in the School of Astronautics, Harbin Institute of Technology. His research interests include systems on chips (SOC) and networks on chips (NOC).

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Yingzhe Liu, Jinxiang Wang, and Fangfa Fu, "Adaptive search range adjustment and multiframe selection algorithm for motion estimation in H.264/AVC," J. Electron. Imaging 22(2), 023031 (Jun 21, 2013). http://dx.doi.org/10.1117/1.JEI.22.2.023031


Figures

Fig. 1: Relationship between the search range (SR) and the predicted motion vector (PMV).

Fig. 2: Relationship between Dif and |mvdn|. (Test conditions: encoded frames = 100; number of reference frames (RFs) = 1; quantization parameter (QP) = 28; SR = 16; group structure is IPP.)

Fig. 3: Multiple-RF motion estimation (ME) with various block sizes.

Fig. 4: Reference maps of different partitions.

Fig. 5: Pseudocode for our algorithm.

Tables

Table 1: Hit ratio of the motion vector (MV) within the new search range (SR) (test conditions: encoded frames = 100; number of reference frames (RFs) = 1; QP = 28, 32, 36, 40; SR = 16; group structure is IPP).

Table 2: Relationship between the movement complexity and ref0 (test conditions: encoded frames = 60; number of RFs = 5; QP = 28, 32, 36, 40; SR = 16; group structure is IPP).

Table 3: Average hit ratios of the proposed algorithm for each block size (test conditions are the same as in Table 2).

Table 4: Performance comparison of the proposed algorithm with JM 15.1 (with one RF).

Table 5: Performance comparison with the UMHexagonS algorithm (test conditions are the same as in Table 4).

Table 6: Performance comparison of the proposed algorithm with JM 15.1 (with five RFs).

References

1. He Z. et al., "Power-rate-distortion analysis for wireless video communication under energy constraints," IEEE Trans. Circuits Syst. Video Technol. 15(5), 645–658 (2005).
2. Denolf K. et al., "Initial memory complexity analysis of the AVC codec," in IEEE Workshop on Signal Processing Systems, pp. 222–227, IEEE Signal Processing Society, San Diego (2002).
3. Gonzales C. A., Yeo H., Kuo C. J., "Requirements for motion-estimation search range in MPEG-2 coded video," IBM J. Res. Dev. 43(4), 453–470 (1999).
4. Ko Y.-H., Kang H.-S., Lee S.-W., "Adaptive search range motion estimation using neighboring motion vector differences," IEEE Trans. Consum. Electron. 57(2), 726–730 (2011).
5. Kang H.-S., Lee S.-W., Hosseini H. G., "Probability constrained search range determination for fast motion estimation," J. Electron. Telecommun. Res. Inst. 34(3), 369–378 (2012).
6. Kim I., Kim J., "Low-complexity block-based motion estimation algorithm using adaptive search range adjustment," Opt. Eng. 51(6), 067010 (2012).
7. Lee S., "Fast motion estimation based on adaptive search range adjustment and matching error prediction," IET Image Process. 4(1), 1–10 (2010).
8. Ismail Y. et al., "Fast variable padding motion estimation using smart zero motion prejudgment technique for pixel and frequency domains," IEEE Trans. Circuits Syst. Video Technol. 19(5), 609–626 (2009).
9. Ismail Y. et al., "Fast motion estimation system using dynamic models for H.264/AVC video coding," IEEE Trans. Circuits Syst. Video Technol. 22(1), 28–42 (2012).
10. Lee S.-W., Park S.-M., Kang H.-S., "Fast motion estimation with adaptive search range adjustment," Opt. Eng. 46(4), 040504 (2007).
11. Lee S., Choi K., Jang E. S., "Adaptive search range adjustment scheme for fast motion estimation in AVC/H.264," Opt. Eng. 50(6), 067402 (2011).
12. Lou C.-C., Lee S.-W., Kuo C.-C. J., "Adaptive motion search range prediction for video encoding," IEEE Trans. Circuits Syst. Video Technol. 20(12), 1903–1908 (2010).
13. Jung J., Kim J., Kyung C.-M., "A dynamic search range algorithm for stabilized reduction of memory traffic in video encoder," IEEE Trans. Circuits Syst. Video Technol. 20(7), 1041–1046 (2010).
14. Urhan O., "Constrained one-bit transform based fast block motion estimation using adaptive search range," IEEE Trans. Consum. Electron. 56(3), 1868–1871 (2010).
15. Chen M.-J. et al., "Fast multiframe motion estimation algorithms by motion vector composition for the MPEG-4/AVC/H.264 standard," IEEE Trans. Multimedia 8(3), 478–487 (2006).
16. Kim S.-E., Han J.-K., Kim J.-G., "An efficient scheme for motion estimation using multireference frames in H.264/AVC," IEEE Trans. Multimedia 8(3), 457–466 (2006).
17. Lee T.-K., Chan Y.-L., Fu C.-H., "Reliable tracking algorithm for multiple reference frame motion estimation," J. Electron. Imaging 20(3), 033003 (2011).
18. Jun D. S., Park H. W., "An efficient priority-based reference frame selection method for fast motion estimation in H.264/AVC," IEEE Trans. Circuits Syst. Video Technol. 20(8), 1156–1161 (2010).
19. Lee K., Jeon G., Jeong J., "Fast reference frame selection algorithm for H.264/AVC," IEEE Trans. Consum. Electron. 55(2), 773–779 (2009).
20. Park I., Capson D. W., "Improved motion estimation time using a combination of dynamic reference frame selection and residue-based mode decision," Signal Image Video Process. 6(1), 25–39 (2012).
21. Shen L., Liu Z., Zhang Z., "Fast multiframe selection algorithm based on spatial-temporal characteristics of motion field," J. Electron. Imaging 17(4), 043004 (2008).
22. Shen L. et al., "An adaptive and fast multiframe selection algorithm for H.264 video coding," IEEE Signal Process. Lett. 14(11), 836–839 (2007).
23. Huang Y.-W. et al., "Analysis and complexity reduction of multiple reference frames motion estimation in H.264/AVC," IEEE Trans. Circuits Syst. Video Technol. 16(4), 507–522 (2006).
24. Li X., Li E. Q., Chen Y.-K., "Fast multi-frame motion estimation algorithm with adaptive search strategies in H.264," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, pp. 369–372, IEEE Signal Processing Society, Montreal, Canada (2004).
25. Andersson K., Sjöberg R., Norkin A., "Reliability measure for BD measurements," ITU-T Q.6/SG16 VCEG 33rd Meeting, Document VCEG-AL22, London, UK (July 2009).
