JEI Letters

High-precision synchronization of video cameras using a single binary light source

Qi Zhao

Fudan University, School of Information Science and Engineering, Shanghai, China

Yan Qiu Chen

Fudan University, School of Computer Science, Shanghai, China

J. Electron. Imaging. 18(4), 040501 (October 20, 2009). doi:10.1117/1.3247860
History: Received April 03, 2009; Revised August 25, 2009; Accepted August 26, 2009; Published October 20, 2009

Open Access

Camera synchronization is necessary for multicamera applications. We propose a simple yet effective approach, termed random on-off light source (ROOLS), to synchronize video sequences. It uses a single light source, such as an LED, to generate a random binary-valued signal that is captured by the video cameras. The captured binary-valued sequences are then matched, and the temporal offset of the cameras is computed to subframe-interval precision. We test the proposed method on video sequences captured under a variety of illumination conditions, and the results are verified against the ground truth provided by an LED array clock. The main contribution of the proposed method is that it reliably achieves high-precision synchronization at the low cost of adding only a simple light source. In addition, it is suited to synchronization in both laboratory and outdoor environments.


For multicamera systems, synchronization is essential: it provides the accurate temporal correspondence needed to combine image information from multiple viewpoints.

Synchronicity can be achieved through real-time hardware synchronization [1] or by establishing a time relationship between sequences recorded by unsynchronized video cameras [2]. While hardware solutions ensure high-precision synchronization, they are costly and complex.

In scenarios where hardware-synchronized video capture is not feasible, it is still possible to obtain synchronicity using image features [3,4,5]. These feature-based methods depend on the existence of salient and robust features in the scene; the absence of such features, or errors in detecting, tracking, and matching them, leads to incorrect synchronization.

In this paper, we present a simple yet effective method, termed random on-off light source (ROOLS), to recover the temporal offset with subframe accuracy. It utilizes an auxiliary light source, such as an LED, to provide temporal cues. Compared to special-purpose hardware approaches, our method is far less complex and far less expensive. Compared to feature-based approaches, ROOLS is more robust, since it is completely independent of scene properties.

Without loss of generality, we consider the case of two video cameras. Let the time instants of the video frames taken by the $\alpha$'th camera be denoted by

$$T^\alpha = \bigl(t_1^\alpha, t_2^\alpha, \ldots, t_{N_\alpha}^\alpha\bigr), \quad \alpha \in \{1, 2\}, \; N_\alpha \in \mathbb{N}, \tag{1}$$

where $N_\alpha$ denotes the length of the $\alpha$'th sequence, and $t_k^\alpha$ denotes the time of the $k$'th frame in the $\alpha$'th sequence. Note that $T^1$ and $T^2$ are measured by a common clock.

In a typical situation, identical video cameras with a constant frame interval $\Delta T$ are used, so that

$$T^\alpha = \bigl(t_1^\alpha,\; t_1^\alpha + \Delta T,\; \ldots,\; t_1^\alpha + (N_\alpha - 1)\,\Delta T\bigr). \tag{2}$$

Synchronizing two video sequences in this situation is equivalent to measuring the temporal offset between their initial frames,

$$T_{\mathrm{diff}} = t_1^1 - t_1^2. \tag{3}$$

Formulation

We propose to use a single temporally coded light source, such as an LED, as the signal to be captured by the cameras for synchronization. The light signal is essentially a time-continuous binary-valued function

$$f: \mathbb{R} \to \{0, 1\}. \tag{4}$$

It is sampled at $T^\alpha$ by the $\alpha$'th camera, producing a time-discrete binary-valued sequence

$$\bigl[f_{\mathrm{cam}}^\alpha(n)\bigr]_{n=1}^{N_\alpha} = \bigl[f(t_n^\alpha)\bigr]_{n=1}^{N_\alpha} = \bigl\{f[(n-1)\,\Delta T + t_1^\alpha]\bigr\}_{n=1}^{N_\alpha}. \tag{5}$$

Since $f$ is binary valued, it can be characterized by the time instants at which its value rises from 0 to 1 or drops from 1 to 0. We term each of these instants a transition event. Let $\Phi$ denote the sequence of all transition events in $f$,

$$\Phi = (\phi_1, \phi_2, \ldots, \phi_N). \tag{6}$$

For each $\phi_k$, we have

$$f(\phi_k - \epsilon) \oplus f(\phi_k + \epsilon) = 1, \quad \epsilon > 0, \tag{7}$$

where $\oplus$ denotes the exclusive-or operator, and $\epsilon$ is an arbitrarily small positive real number. For the $\alpha$'th camera, let $\Phi^\alpha$ denote the transition events corresponding to $\Phi$. As part of $T^\alpha$, $\Phi^\alpha$ can be expressed as

$$\Phi^\alpha = \bigl[t_{I^\alpha(1)}^\alpha, t_{I^\alpha(2)}^\alpha, \ldots, t_{I^\alpha(N)}^\alpha\bigr], \tag{8}$$

where $I^\alpha$ is a subsequence of $(1, 2, \ldots, N_\alpha)$. Each $I^\alpha(k)$ satisfies

$$f_{\mathrm{cam}}^\alpha[I^\alpha(k)] \oplus f_{\mathrm{cam}}^\alpha[I^\alpha(k) - 1] = 1. \tag{9}$$

These notations and their relationships are illustrated in Fig. 1.

Fig. 1: Achieving subframe-interval estimation: (1) the transition events of $f$, $f_{\mathrm{cam}}^1$, and $f_{\mathrm{cam}}^2$ are highlighted by black dots; (2) black vertical bars denote the mean positions of the corresponding transition events in $f$, $f_{\mathrm{cam}}^1$, and $f_{\mathrm{cam}}^2$.
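
To make Eq. 9 concrete, the following minimal Python sketch (the function name transition_indices is ours, not from the paper, and indices here are 0-based) extracts the transition indices $I^\alpha(k)$ from a sampled binary sequence:

```python
import numpy as np

def transition_indices(f_cam):
    """Return the frame indices I(k) at which the sampled binary
    sequence flips, i.e. where f_cam[n] XOR f_cam[n-1] == 1 (Eq. 9).
    Each event is attributed to the first frame *after* the flip."""
    f_cam = np.asarray(f_cam, dtype=int)
    return np.nonzero(f_cam[1:] ^ f_cam[:-1])[0] + 1

# Example: 0 0 1 1 1 0 has transitions at frame 2 (rise) and frame 5 (fall)
print(transition_indices([0, 0, 1, 1, 1, 0]))  # -> [2 5]
```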

Achieving Subframe-Interval Precision

Given $\Phi$ and $\Phi^\alpha$, consider the difference between a pair of corresponding transition events,

$$\delta_k^\alpha = \Phi_k^\alpha - \Phi_k = t_{I^\alpha(k)}^\alpha - \phi_k. \tag{10}$$

Suppose the $\delta_k^\alpha$ are independent and identically distributed (i.i.d.) random variables uniformly distributed over $[0, \Delta T]$, with mean $\mu = \Delta T / 2$ and variance $\sigma^2 = \Delta T^2 / 12$. The averaged difference $\overline{\delta^\alpha} = \frac{1}{N} \sum_{k=1}^{N} \delta_k^\alpha$ is then a random variable with mean $\tilde{\mu} = \mu$ and variance $\tilde{\sigma}^2 = \sigma^2 / N$. Moreover,

$$\overline{\delta^\alpha} = \frac{1}{N} \sum_{k=1}^{N} \bigl[t_{I^\alpha(k)}^\alpha - \phi_k\bigr] = \overline{t_{I^\alpha}^\alpha} - \bar{\phi}, \tag{11}$$

where $\overline{t_{I^\alpha}^\alpha}$ and $\bar{\phi}$ are the mean positions of the transition events of $f_{\mathrm{cam}}^\alpha$ and $f$, respectively, as illustrated in Fig. 1. According to the central limit theorem [6], the average of a sufficiently large number of i.i.d. random variables, each with finite mean and variance, approximates a normal distribution. Hence, $\overline{\delta^\alpha}$ follows a normal distribution with mean $\mu$ and variance $\Delta T^2 / (12N)$. From Eq. 11 we obtain

$$\overline{t_{I^1}^1} - \overline{t_{I^2}^2} = \overline{\delta^1} - \overline{\delta^2}. \tag{12}$$

Since $\overline{t_{I^\alpha}^\alpha} = \frac{1}{N} \sum_{k=1}^{N} \bigl\{t_1^\alpha + [I^\alpha(k) - 1]\,\Delta T\bigr\}$, Eq. 12 can be further written as

$$T'_{\mathrm{diff}} = \frac{1}{N} \sum_{k=1}^{N} \bigl[I^2(k) - I^1(k)\bigr] + \delta', \tag{13}$$

where $T'_{\mathrm{diff}} = T_{\mathrm{diff}} / \Delta T$ and $\delta' = (\overline{\delta^1} - \overline{\delta^2}) / \Delta T \sim \mathcal{N}\bigl(0, \frac{1}{6N}\bigr)$. When $N$ is sufficiently large, the variance of $\delta'$ becomes negligibly small, leading to a high-precision estimate of $T_{\mathrm{diff}}$.
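
As an illustration of Eq. 13, once the transition indices of the two cameras have been matched one-to-one (the matching step is described below), the normalized offset is simply the mean of the index differences. This sketch is ours; the sign convention follows the reconstruction of Eq. 13 above:

```python
import numpy as np

def estimate_offset(I1, I2):
    """Estimate the normalized temporal offset T'_diff from matched
    transition indices of the two cameras, per Eq. 13:
    T'_diff ~= (1/N) * sum_k [I2(k) - I1(k)]."""
    I1, I2 = np.asarray(I1, dtype=float), np.asarray(I2, dtype=float)
    assert I1.shape == I2.shape, "transition indices must be matched one-to-one"
    return float(np.mean(I2 - I1))  # in units of the frame interval dT
```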

Transition Detection Accuracy

In real-world applications, the binary sequence is obtained by quantizing the image intensity of the light source with a threshold $\tau$. For samples crossing transition events, the quantized binary value might flip, causing the detected transition to shift one frame backward. For instance, a sample taken right before an edge where the signal rises from 0 to 1 may have a measured intensity close to 1 and be incorrectly quantized to 1. Equation 13 shows that such shifts introduce additional error into the estimate. Suppose the light source intensity is normalized and $\tau = 0.5$ is chosen as the threshold, so that the probabilities for transitions to flip from 0 to 1 and from 1 to 0 are identical. Let $x_i^\alpha$ denote a single shift event in video sequence $\alpha$. Its probability density function (pdf) is $p(x_i^\alpha = 1) = p(x_i^\alpha = 0) = 0.5$, with expectation $\mu_{x_i^\alpha} = 0.5$ and variance $\sigma^2_{x_i^\alpha} = 0.25$. Let $\overline{x^\alpha} = \frac{1}{N} \sum_{i=1}^{N} x_i^\alpha$ denote the averaged transition shift. Because the $x_i^\alpha$ are i.i.d. random variables, the central limit theorem again gives $\overline{x^\alpha} \sim \mathcal{N}\bigl(0.5, \frac{1}{4N}\bigr)$. According to Eq. 13, the extra error $\delta_s$ introduced by transition shifts in the two video sequences is $\overline{x^1} - \overline{x^2}$. It can be shown that $\delta_s$ follows a normal distribution with mean 0 and variance $\frac{1}{2N}$. Accounting for $\delta_s$, the total error of the temporal offset estimate is

$$\delta = \delta_s + \delta' \sim \mathcal{N}\bigl(0, \tfrac{1}{1.5N}\bigr). \tag{14}$$
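
The error model of Eqs. 13 and 14 is easy to check numerically. The following sketch, which assumes the sampling error $\delta'$ and the shift error $\delta_s$ are independent (as the summed variances imply), draws both and compares the empirical variance of the total error with $1/(1.5N)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, dT = 200, 10_000, 1.0

# delta': difference of two averaged U[0, dT] sampling offsets, in frame units
d1 = rng.uniform(0, dT, (trials, N)).mean(axis=1)
d2 = rng.uniform(0, dT, (trials, N)).mean(axis=1)
delta_prime = (d1 - d2) / dT             # ~ N(0, 1/(6N))

# delta_s: difference of two averaged Bernoulli(0.5) transition shifts
x1 = rng.integers(0, 2, (trials, N)).mean(axis=1)
x2 = rng.integers(0, 2, (trials, N)).mean(axis=1)
delta_s = x1 - x2                        # ~ N(0, 1/(2N))

total = delta_prime + delta_s            # assumes the two errors are independent
print(total.var(), 1 / (1.5 * N))        # both come out near 3.3e-3
```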

Random Binary Sequence Design

The proposed method requires the $\delta_k^\alpha = t_{I^\alpha(k)}^\alpha - \phi_k$ to be i.i.d. random variables uniformly distributed in $[0, \Delta T]$. To achieve this, we set the transition times as

$$\phi_k = \phi_{k-1} + \chi, \tag{15}$$

where $\phi_{k-1}$ is the time of the previous transition and $\chi$ is uniformly distributed in $[\iota\,\Delta T, (\iota + \kappa)\,\Delta T]$, $\iota, \kappa \in \mathbb{N}$. Transition times generated in this way can be shown to ensure that the $\delta_k^\alpha$ meet the requirement.
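
A minimal sketch of the sequence design in Eq. 15 (function names are ours): inter-transition gaps are drawn uniformly from $[\iota\,\Delta T, (\iota + \kappa)\,\Delta T]$ and accumulated into transition times, and the light value at any instant is the parity of the number of transitions so far:

```python
import numpy as np

def generate_transitions(n, dT, iota=2, kappa=6, seed=None):
    """Draw n transition times per Eq. 15: each gap chi is uniform on
    [iota*dT, (iota+kappa)*dT]. The defaults iota=2, kappa=6 match the
    experimental configuration reported below."""
    rng = np.random.default_rng(seed)
    gaps = rng.uniform(iota * dT, (iota + kappa) * dT, size=n)
    return np.cumsum(gaps)

def light_signal(t, phis):
    """Evaluate f(t) of Eq. 4: the source starts at 0 and flips its
    on/off state at every transition time in phis."""
    return np.searchsorted(phis, t) % 2
```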

Transition Matching

The estimation of the temporal offset requires that the transition events of the two cameras be matched. We refer to this process as transition matching. Let the segment between two consecutive transition events be denoted by $D^\alpha(k)$:

$$D^\alpha(k) = \bigl\{f_{\mathrm{cam}}^\alpha[I^\alpha(k+1)] - f_{\mathrm{cam}}^\alpha[I^\alpha(k)]\bigr\} \times \bigl[I^\alpha(k+1) - I^\alpha(k)\bigr]. \tag{16}$$

The binary sequence $[f_{\mathrm{cam}}^\alpha(n)]_{n=1}^{N_\alpha}$ can be equivalently represented by a sequence of transition segments. Let $\lambda(i, j) = |D^1(i) - D^2(j)|$ denote the difference between two segments. Transition matching can then be achieved by matching the two sequences of transition segments. Based on the observation that the differences between corresponding segments are small, the optimal transition matching is obtained by solving

$$\arg\min_{l, i, j} \left( \frac{\sum_{\gamma=0}^{l-1} \lambda(i+\gamma, j+\gamma)}{\max\Bigl[\sum_{\gamma=0}^{l-1} |D^1(i+\gamma)|,\; \sum_{\gamma=0}^{l-1} |D^2(j+\gamma)|\Bigr]} + \exp\left\{ -\frac{\sum_{\gamma=0}^{l-1} |D^1(i+\gamma)| + \sum_{\gamma=0}^{l-1} |D^2(j+\gamma)|}{2 \max\Bigl[\sum_{k=1}^{M_1-1} |D^1(k)|,\; \sum_{k=1}^{M_2-1} |D^2(k)|\Bigr]} \right\} \right), \tag{17}$$

where $l$ denotes the number of overlapping segments, and $M_1$ and $M_2$ are the lengths of the two segment sequences. The first term assesses the similarity of the two overlapping segment subsequences. However, considering only the first term can lead to erroneous matching when the overlap length $l$ is short. To avoid this, the second term imposes a large penalty on small $l$. The optimal solution is found by evaluating all combinations of $i$ and $j$.
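
A brute-force sketch of the matching step follows (function names are ours; the negated exponent matches the reconstruction of Eq. 17 above, in which the penalty must be large for short overlaps). It enumerates all $(i, j, l)$ combinations as the text prescribes, which is affordable for a few hundred segments:

```python
import numpy as np

def segments(f_cam):
    """Signed transition segments D(k) of Eq. 16: the magnitude is the
    run length I(k+1) - I(k); the sign is f[I(k+1)] - f[I(k)] = +/-1."""
    f_cam = np.asarray(f_cam, dtype=int)
    idx = np.nonzero(f_cam[1:] ^ f_cam[:-1])[0] + 1   # transition indices
    signs = 1 - 2 * f_cam[idx[:-1]]   # -1 if the level after the k'th transition is 1
    return signs * np.diff(idx)

def match(D1, D2):
    """Search over (l, i, j) for the alignment minimizing the Eq. 17 cost:
    a normalized segment-difference term plus an exponential penalty
    that discourages short overlaps."""
    D1, D2 = np.asarray(D1), np.asarray(D2)
    M1, M2 = len(D1), len(D2)
    norm = 2 * max(np.abs(D1).sum(), np.abs(D2).sum())
    best_cost, best = np.inf, None
    for i in range(M1):
        for j in range(M2):
            for l in range(1, min(M1 - i, M2 - j) + 1):
                a, b = D1[i:i + l], D2[j:j + l]
                sa, sb = np.abs(a).sum(), np.abs(b).sum()
                cost = (np.abs(a - b).sum() / max(sa, sb)
                        + np.exp(-(sa + sb) / norm))
                if cost < best_cost:
                    best_cost, best = cost, (i, j, l)
    return best   # matched segments: D1[i:i+l] <-> D2[j:j+l]
```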

Hardware and Configuration

The experimental system consists of two Sony HVR-V1 high-definition (HD) video cameras, an LED array clock providing the ground truth, and a single temporally encoded LED light source. The video cameras operate at 200 frames per second. The values of $\iota$ and $\kappa$ in Eq. 15 are set to 2 and 6, which ensures that there are at least two frames between adjacent transition events and avoids ambiguity in transition matching. We selected $\tau = 0.5$ to quantize the image intensity of the LED to a binary value.
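
As a small illustration of this configuration, the per-frame LED intensity can be thresholded as follows. The min-max normalization step is our assumption; the paper states only that the intensity is normalized before thresholding:

```python
import numpy as np

def binarize(intensity, tau=0.5):
    """Quantize per-frame LED image intensity to {0, 1} with threshold
    tau = 0.5, matching the experimental configuration."""
    intensity = np.asarray(intensity, dtype=float)
    intensity = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    return (intensity > tau).astype(int)
```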

Experimental Results

We conducted three groups of experiments under illumination conditions including daylight, fluorescent lighting, and darkness. The results are shown in Table 1. In all the tests, only 200 transition events were used. The average estimation error was about 0.08 frame intervals, and all estimation errors were less than 0.2 frame intervals; this bound is explained in the analysis below.

Table 1: Results of 10 experiments.

Comparison with Other Methods

The proposed method was compared with the feature-based approaches [3,4,5], and the results are summarized in Table 2. The comparison indicates that ROOLS achieves higher estimation accuracy than the existing approaches.

Table 2: Average temporal offset error of various approaches.

Analysis and Discussion

A property of the normal distribution is that 3 standard deviations from the mean account for about 99.7% of the distribution. When $N = 200$ transitions are used, according to Eq. 14 the standard deviation is $\sqrt{1/(1.5 \times 200)} \approx 0.058$, so the estimation error is bounded by $3 \times 0.058 \approx 0.17$ frame intervals. This explains why the estimation errors in Table 1 are bounded by 0.2 frame intervals. The performance of the proposed method can be further improved by increasing $N$.

We have presented an innovative approach to synchronizing commercial video cameras. It achieves high-precision synchronization at the low cost of adding only a simple temporally coded light source. The method's requirement that the video cameras have identical frame rates is not a serious limitation, since using identical video cameras for a single task is both convenient and typical.

The research work presented in this paper is supported by the National Natural Science Foundation of China, Grant No. 60875024.

References

1. T. Kanade, H. Saito, and S. Vedula, "The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams," The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (1998).
2. A. Whitehead, R. Laganiere, and P. Bose, "Temporal synchronization of video sequences in theory and in practice," in Proc. IEEE Workshop on Motion and Video Computing, Vol. 2, pp. 132–137 (2005).
3. Y. Caspi and M. Irani, "Spatio-temporal alignment of sequences," IEEE Trans. Pattern Anal. Mach. Intell. 24, 1409–1424 (2002).
4. C. Rao, A. Gritai, M. Shah, and T. Syeda-Mahmood, "View-invariant alignment and matching of video sequences," in Proc. 9th IEEE Int. Conf. on Computer Vision, pp. 939–945 (2003).
5. S. N. Sinha and M. Pollefeys, "Synchronization and calibration of camera networks from silhouettes," in Proc. 17th Int. Conf. on Pattern Recognition (ICPR 2004), Vol. 1, pp. 116–119, IEEE Computer Society, Washington, D.C. (2004).
6. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2, Wiley, New York (1971).
© 2009 SPIE and IS&T

