High-precision synchronization of video cameras using a single binary light source
Qi Zhao, Yan Qiu Chen
Abstract
Camera synchronization is necessary for multicamera applications. We propose a simple yet effective approach, termed random on-off light source (ROOLS), to synchronize video sequences. It uses a single light source, such as an LED, to generate a random binary-valued signal that is captured by the video cameras. The captured binary-valued sequences are then matched, and the temporal offset between the cameras is computed with sub-frame-interval precision. We test the proposed method on synchronizing video sequences captured under a variety of illumination conditions, and the results are verified against the ground truth provided by an LED array clock. The main contribution of the proposed method is that it reliably achieves high-precision synchronization at the low cost of adding only a simple light source. In addition, it is suited to synchronization in both laboratory and outdoor environments.

1. Introduction

For multicamera systems, synchronization is essential: it provides the accurate temporal correlation needed to combine image information from multiple viewpoints.

Synchronicity can be achieved through real-time hardware synchronization [1] or by establishing a time relationship between sequences recorded by unsynchronized video cameras [2]. While hardware solutions ensure high-precision synchronization, they are costly and complex.

When hardware synchronization is not feasible, it is still possible to obtain synchronicity from image features [3, 4, 5]. These feature-based methods depend on the existence of salient and robust features in the scene; the absence of such features, or errors in detecting, tracking, and matching them, leads to incorrect synchronization.

In this paper, we present a simple yet effective method, termed random on-off light source (ROOLS), that recovers the temporal offset with sub-frame accuracy. It utilizes an auxiliary light source, such as an LED, to provide temporal cues. Compared to special-purpose hardware, our method is far less complex and far less expensive. Compared to feature-based approaches, ROOLS is more robust, since it is completely independent of scene properties.

2. Problem Statement

Without loss of generality, we consider the case of two video cameras. Let the time instants of the video frames taken by the $\alpha$th camera be denoted by

Eq. 1

$$T_\alpha = \left(t_1^\alpha, t_2^\alpha, \ldots, t_{N_\alpha}^\alpha\right), \quad \alpha \in \{1, 2\},\ N_\alpha \in \mathbb{N},$$
where $N_\alpha$ denotes the length of the $\alpha$th sequence, and $t_k^\alpha$ denotes the time of the $k$th frame in the $\alpha$th sequence. Note that $T_1$ and $T_2$ are measured by a common clock.

In a typical situation, identical video cameras with a constant frame interval $\Delta T$ are used, in which case

Eq. 2

$$T_\alpha = \left(t_1^\alpha,\ t_1^\alpha + \Delta T,\ \ldots,\ t_1^\alpha + (N_\alpha - 1)\Delta T\right).$$
Synchronizing two video sequences in such a situation is equivalent to measuring the temporal offset between their initial frames:

Eq. 3

$$T_{\mathrm{diff}} = t_1^1 - t_1^2.$$

3. Proposed Method

3.1. Formulation

We propose to use a single temporally coded light source such as an LED as the signal to be captured by the cameras for synchronization. The light signal is essentially a time-continuous binary-valued function denoted as

Eq. 4

$$f : \mathbb{R} \rightarrow \{0, 1\}.$$
It is sampled at $T_\alpha$ by the $\alpha$th camera, producing a time-discrete binary-valued sequence

Eq. 5

$$\left[f_{\mathrm{cam}}^\alpha(n)\right]_{n=1}^{N_\alpha} = \left[f(t_n^\alpha)\right]_{n=1}^{N_\alpha} = \left\{f\left[(n-1)\Delta T + t_1^\alpha\right]\right\}_{n=1}^{N_\alpha}.$$
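To make the sampling model concrete, the following Python sketch (ours, not part of the paper; the function name is illustrative) simulates Eq. 5, assuming the light signal starts at 0 and flips at each transition time:

```python
import numpy as np

def sample_signal(transitions, t1, dT, N):
    """Sample the binary light signal f at the camera frame times
    t1 + (n-1)*dT, n = 1..N (Eq. 5). The signal is assumed to be 0
    before the first transition and to flip at every time in
    `transitions` (sorted, ascending)."""
    frame_times = t1 + dT * np.arange(N)
    # f(t) equals the number of transitions occurring strictly before t, mod 2
    return np.searchsorted(transitions, frame_times) % 2
```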
Since $f$ is binary valued, it can be characterized by the time instants at which its value rises from 0 to 1 or drops from 1 to 0. We term each of these instants a transition event. Let $\Phi$ denote a subsequence of the transition events in $f$:

Eq. 6

$$\Phi = (\phi_1, \phi_2, \ldots, \phi_N).$$
For each $\phi_k$, we have

Eq. 7

$$\forall\, \epsilon > 0, \quad f(\phi_k - \epsilon) \oplus f(\phi_k + \epsilon) = 1,$$
where $\oplus$ denotes the exclusive-or operator and $\epsilon$ is an arbitrarily small positive real number. For the $\alpha$th camera, let $\Phi_\alpha$ denote the transition events corresponding to $\Phi$. Obviously, as part of $T_\alpha$, $\Phi_\alpha$ can be expressed as follows:

Eq. 8

$$\Phi_\alpha = \left[t_{I_\alpha(1)}^\alpha,\ t_{I_\alpha(2)}^\alpha,\ \ldots,\ t_{I_\alpha(N)}^\alpha\right],$$
where $I_\alpha$ is a subsequence of $(1, 2, \ldots, N_\alpha)$. Each $I_\alpha(k)$ satisfies

Eq. 9

$$f_{\mathrm{cam}}^\alpha\left[I_\alpha(k)\right] \oplus f_{\mathrm{cam}}^\alpha\left[I_\alpha(k) - 1\right] = 1.$$
These notations and their relationships are illustrated in Fig. 1.

Fig. 1

Achieving sub-frame-interval estimation: (1) the transition events of $f$, $f_{\mathrm{cam}}^1$, and $f_{\mathrm{cam}}^2$ are highlighted by black dots; (2) black vertical bars denote the mean positions of corresponding transition events in $f$, $f_{\mathrm{cam}}^1$, and $f_{\mathrm{cam}}^2$.

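For illustration, the transition events $I_\alpha(k)$ of a sampled sequence can be located directly from Eq. 9. A minimal Python sketch (ours, not from the paper):

```python
import numpy as np

def transition_indices(f_cam):
    """Return the frame indices I_alpha(k) satisfying Eq. 9, i.e.,
    positions n where f_cam[n] XOR f_cam[n-1] == 1 (0-based)."""
    f = np.asarray(f_cam, dtype=int)
    return np.nonzero(f[1:] ^ f[:-1])[0] + 1
```

For example, `transition_indices([0, 0, 1, 1, 0])` returns `[2, 4]`.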

3.2. Achieving Sub-Frame-Interval Precision

Given $\Phi$ and $\Phi_\alpha$, consider the difference between a pair of corresponding transition events:

Eq. 10

$$\delta_k^\alpha = \Phi_\alpha(k) - \Phi(k) = t_{I_\alpha(k)}^\alpha - \phi_k.$$
Provided the random binary sequence is properly designed (see Sec. 3.4), the $\delta_k^\alpha$ are independent and identically distributed (i.i.d.) random variables, uniformly distributed over $[0, \Delta T]$, with mean $\mu = \Delta T/2$ and variance $\sigma^2 = \Delta T^2/12$. The averaged difference $\overline{\delta^\alpha} = \frac{1}{N}\sum_{k=1}^{N}\delta_k^\alpha$ is thus a random variable with mean $\tilde{\mu} = \mu$ and variance $\tilde{\sigma}^2 = \sigma^2/N$. Moreover,

Eq. 11

$$\overline{\delta^\alpha} = \frac{1}{N}\sum_{k=1}^{N}\left[t_{I_\alpha(k)}^\alpha - \phi_k\right] = \overline{t_{I_\alpha}} - \overline{\phi},$$
where $\overline{t_{I_\alpha}}$ and $\overline{\phi}$ are the mean positions of the transition events of $f_{\mathrm{cam}}^\alpha$ and $f$, as illustrated in Fig. 1. By the central limit theorem [6], the average of a sufficiently large number of i.i.d. random variables, each with finite mean and variance, is approximately normally distributed. Hence, $\overline{\delta^\alpha}$ follows an approximately normal distribution with mean $\mu$ and variance $\Delta T^2/(12N)$. From Eq. 11 we obtain

Eq. 12

$$\overline{t_{I_1}^1} - \overline{t_{I_2}^2} = \overline{\delta^1} - \overline{\delta^2}.$$
Since $\overline{t_{I_\alpha}^\alpha} = \frac{1}{N}\sum_{k=1}^{N}\left\{t_1^\alpha + \left[I_\alpha(k) - 1\right]\Delta T\right\}$, Eq. 12 can be further written as

Eq. 13

$$T_{\mathrm{diff}}' = \frac{1}{N}\sum_{k=1}^{N}\left[I_2(k) - I_1(k)\right] + \delta,$$
where $T_{\mathrm{diff}}' = T_{\mathrm{diff}}/\Delta T$ and $\delta = (\overline{\delta^1} - \overline{\delta^2})/\Delta T \sim N\!\left(0, \frac{1}{6N}\right)$. When $N$ is sufficiently large, the variance of $\delta$ becomes negligibly small, leading to a high-precision estimate of $T_{\mathrm{diff}}$.
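Once the transition events of the two cameras have been matched (Sec. 3.5), Eq. 13 reduces to averaging index differences. A sketch (ours; `idx1` and `idx2` are assumed to hold the matched indices $I_1(k)$ and $I_2(k)$):

```python
import numpy as np

def estimate_offset(idx1, idx2):
    """Estimate T'_diff = T_diff / dT via Eq. 13: the mean of
    I_2(k) - I_1(k) over the N matched transition events."""
    idx1, idx2 = np.asarray(idx1), np.asarray(idx2)
    assert idx1.shape == idx2.shape, "transition events must be matched pairwise"
    return float(np.mean(idx2 - idx1))
```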

3.3. Transition Detection Accuracy

In real-world applications, the binary sequence is obtained by quantizing the image intensity of the light source with a threshold $\tau$. For samples that straddle a transition event, the quantized binary value may flip, shifting the detected transition one frame backward. For instance, a frame sampled just before a rising edge may integrate enough light that its intensity is close to 1 and is incorrectly quantized to 1. Equation 13 tells us that such shifts introduce additional error into the estimate. Suppose the light-source intensity is normalized and $\tau = 0.5$ is chosen as the threshold, so that the probabilities for a transition to flip from 0 to 1 and from 1 to 0 are identical. Let $x_i^\alpha$ denote a single shift event in video sequence $\alpha$; its distribution is $p(x_i^\alpha = 1) = p(x_i^\alpha = 0) = 0.5$, with expectation $\mu_{x_i^\alpha} = 0.5$ and variance $\sigma_{x_i^\alpha}^2 = 0.25$. Let $\overline{x^\alpha} = \frac{1}{N}\sum_{i=1}^{N} x_i^\alpha$ denote the averaged transition shift. Because the $x_i^\alpha$ are i.i.d. random variables, once again by the central limit theorem we have $\overline{x^\alpha} \sim N\!\left(0.5, \frac{1}{4N}\right)$. According to Eq. 13, the extra error $\delta_s$ introduced by transition shifts in the two video sequences is $\overline{x^1} - \overline{x^2}$. It can be shown that $\delta_s$ follows a normal distribution with mean 0 and variance $\frac{1}{2N}$. Accounting for $\delta_s$, the total error $\delta^{\ast}$ of the temporal offset estimate is

Eq. 14

$$\delta^{\ast} = \delta_s + \delta \sim N\!\left(0, \frac{1}{1.5N}\right).$$
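The variance in Eq. 14 is easy to verify numerically. The following sketch (ours) simulates the uniform sampling offsets of Sec. 3.2 together with the Bernoulli transition shifts of Sec. 3.3 for two cameras, and compares the empirical variance of the total error with $1/(1.5N)$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N, trials = 200, 20_000
# Per camera and trial: mean of N uniform [0, 1) sampling offsets plus
# the mean of N Bernoulli(0.5) one-frame transition shifts.
d1 = rng.random((trials, N)).mean(axis=1) + rng.integers(0, 2, (trials, N)).mean(axis=1)
d2 = rng.random((trials, N)).mean(axis=1) + rng.integers(0, 2, (trials, N)).mean(axis=1)
err = d1 - d2                         # total error, in frame intervals
print(err.var(), 1 / (1.5 * N))       # both come out near 0.0033
```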

3.4. Random Binary Sequence Design

The proposed method requires the $\delta_k^\alpha = t_{I_\alpha(k)}^\alpha - \phi_k$ to be i.i.d. random variables uniformly distributed in $[0, \Delta T]$. To achieve this, we set the transition times to

Eq. 15

$$\phi_k = \phi_{k-1} + \chi,$$
where $\phi_{k-1}$ is the time of the previous transition and $\chi$ is uniformly distributed in $[\iota \Delta T, (\iota + \kappa)\Delta T]$, with $\iota, \kappa \in \mathbb{N}$. Transition times generated in this way can be proved to ensure that $\delta_k^\alpha$ meets this requirement.
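A sketch (ours) of generating such a transition-time sequence per Eq. 15; the default $\iota = 2$ and $\kappa = 6$ mirror the configuration used in Sec. 4.1:

```python
import numpy as np

def generate_transitions(num, dT, iota=2, kappa=6, rng=None):
    """Generate `num` transition times phi_k = phi_{k-1} + chi (Eq. 15),
    with each gap chi drawn uniformly from [iota*dT, (iota+kappa)*dT]."""
    rng = rng or np.random.default_rng()
    gaps = rng.uniform(iota * dT, (iota + kappa) * dT, size=num)
    return np.cumsum(gaps)
```

With $\iota = 2$, adjacent transitions are at least two frame intervals apart, so no transition is missed by the sampling.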

3.5. Transition Matching

The estimation of the temporal offset requires that the transition events of the two cameras be matched. We refer to this process as transition matching. Let the segment between two consecutive transition events be denoted by $D_\alpha(k)$:

Eq. 16

$$D_\alpha(k) = \left\{f_{\mathrm{cam}}^\alpha\left[I_\alpha(k+1)\right] - f_{\mathrm{cam}}^\alpha\left[I_\alpha(k)\right]\right\} \times \left[I_\alpha(k+1) - I_\alpha(k)\right].$$
The binary sequence $[f_{\mathrm{cam}}^\alpha(n)]_{n=1}^{N_\alpha}$ can be equivalently represented by its sequence of transition segments. Let $\lambda(i, j) = |D_1(i) - D_2(j)|$ denote the difference between two segments. Transition matching can then be achieved by matching the two sequences of transition segments. Based on the observation that the difference between corresponding segments should be small, the optimal transition matching is obtained by solving

Eq. 17

$$\underset{l,\, i,\, j}{\arg\min}\left(\frac{\sum_{\gamma=0}^{l-1} \lambda(i+\gamma,\, j+\gamma)}{\max\left[\sum_{\gamma=0}^{l-1} \left|D_1(i+\gamma)\right|,\ \sum_{\gamma=0}^{l-1} \left|D_2(j+\gamma)\right|\right]} + \exp\left\{-\frac{\sum_{\gamma=0}^{l-1} \left|D_1(i+\gamma)\right| + \sum_{\gamma=0}^{l-1} \left|D_2(j+\gamma)\right|}{2\max\left[\sum_{k=1}^{M_1-1} \left|D_1(k)\right|,\ \sum_{k=1}^{M_2-1} \left|D_2(k)\right|\right]}\right\}\right),$$
where $l$ denotes the number of overlapping segments, and $M_1$ and $M_2$ are the lengths of the two segment sequences. The first term assesses the similarity of the two overlapping segment subsequences. However, considering only the first term can lead to erroneous matching when the overlap length $l$ is small. To avoid this, the second term imposes a large penalty on small $l$. The optimal solution is found by evaluating all combinations of $i$ and $j$.
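As an illustration of the search, a brute-force implementation of Eq. 17 might look as follows (our sketch, not the authors' code; for brevity it fixes $l$ to the full available overlap at each $(i, j)$ rather than also minimizing over $l$):

```python
import math

def match_segments(D1, D2):
    """Align two signed segment sequences (Eq. 16) by exhaustively
    evaluating the cost of Eq. 17; returns the best 0-based (i, j)."""
    M1, M2 = len(D1), len(D2)
    denom = 2 * max(sum(abs(d) for d in D1), sum(abs(d) for d in D2))
    best_cost, best_ij = math.inf, None
    for i in range(M1):
        for j in range(M2):
            l = min(M1 - i, M2 - j)                  # overlap length
            lam = sum(abs(D1[i + g] - D2[j + g]) for g in range(l))
            s1 = sum(abs(D1[i + g]) for g in range(l))
            s2 = sum(abs(D2[j + g]) for g in range(l))
            cost = lam / max(s1, s2) + math.exp(-(s1 + s2) / denom)
            if cost < best_cost:
                best_cost, best_ij = cost, (i, j)
    return best_ij
```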

4. Experiments

4.1. Hardware and Configuration

The experimental system consists of two Sony HVR-V1 high-definition (HD) video cameras, an LED array clock providing the ground truth, and a single temporally coded LED light source. The video cameras operate at 200 frames per second. The values of $\iota$ and $\kappa$ in Eq. 15 are set to 2 and 6, which ensures that there are at least two frames between adjacent transition events and avoids ambiguity in transition matching. We selected $\tau = 0.5$ to quantize the image intensity of the LED to a binary value.
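The $\tau = 0.5$ quantization step amounts to thresholding the normalized per-frame intensity of the LED region; a minimal sketch (ours):

```python
import numpy as np

def binarize(intensity, tau=0.5):
    """Quantize normalized per-frame LED intensities to {0, 1} via threshold tau."""
    return (np.asarray(intensity, dtype=float) >= tau).astype(int)
```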

4.2. Experiment Results

We conducted three groups of experiments under illumination conditions including daylight, fluorescent lighting, and darkness. The results are shown in Table 1. In all the tests, only 200 transition events were used. The average estimation error was about 0.08 frame intervals, and all estimation errors are less than 0.2 frame intervals; this is explained in Sec. 4.4.

Table 1

Results of 10 experiments. Offsets and errors are given in frame intervals.

Illumination   Estimated   Ground Truth   Error
Daylight       3.410       3.2653         0.1447
Daylight       0.6150      0.6000         0.0150
Daylight       0.5600      0.6122         0.0522
Fluorescent    0.4900      0.6122         0.1222
Fluorescent    0.1100      0              0.1100
Darkness       2.9200      2.9605         0.0405
Darkness       1.1150      1.1475         0.0325
Darkness       2.8850      2.7961         0.0889
Darkness       1.1900      1.3158         0.1258
Darkness       2.7300      2.6273         0.1027

4.3. Comparison with Other Methods

The proposed method was compared with the feature-based approaches [3, 4, 5], and the results are summarized in Table 2. The comparison indicates that ROOLS achieves higher estimation accuracy than the existing approaches.

Table 2

Average temporal offset error (in frame intervals) of various approaches.

Method          [3]    [4]    [5]    ROOLS
Average error   0.1    1      0.2    0.08

4.4. Analysis and Discussion

A property of the normal distribution is that values within 3 standard deviations of the mean account for about 99.7% of the distribution. When $N = 200$ transitions are used, the standard deviation given by Eq. 14 is $\sqrt{1/(1.5 \times 200)} \approx 0.058$, so the estimation error is bounded by $3 \times 0.058 \approx 0.17$ frame intervals. This explains why the estimation errors in Table 1 are all below 0.2 frame intervals. The performance of the proposed method can be further improved by increasing $N$.

5. Conclusion

We presented an innovative approach to synchronizing commercial video cameras. It achieves high-precision synchronization at the low cost of adding only a simple temporally coded light source. The method requires the video cameras to have identical frame rates, but this is not a serious limitation, since using identical video cameras for a single task is both convenient and typical.

Acknowledgments

The research work presented in this paper was supported by the National Natural Science Foundation of China, Grant No. 60875024.

References

1. T. Kanade, H. Saito, and S. Vedula, "The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams" (1998).
2. A. Whitehead, R. Laganiere, and P. Bose, "Temporal synchronization of video sequences in theory and in practice," 132–137 (2005).
3. Y. Caspi and M. Irani, "Spatio-temporal alignment of sequences," IEEE Trans. Pattern Anal. Mach. Intell. 24, 1409–1424 (2002). https://doi.org/10.1109/TPAMI.2002.1046148
4. C. Rao, A. Gritai, M. Shah, and T. Syeda-Mahmood, "View-invariant alignment and matching of video sequences," 939–945 (2003).
5. S. N. Sinha and M. Pollefeys, "Synchronization and calibration of camera networks from silhouettes," 116–119 (2004).
6. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2, Wiley, New York (1971).
© 2009 Society of Photo-Optical Instrumentation Engineers (SPIE)
Qi Zhao and Yan Qiu Chen, "High-precision synchronization of video cameras using a single binary light source," Journal of Electronic Imaging 18(4), 040501 (1 October 2009). https://doi.org/10.1117/1.3247860
Keywords: Cameras; Video; Light sources; Binary data; Content addressable memory; Light emitting diodes; Error analysis
