This paper presents a video-based camera tracker that combines marker-based and feature point-based cues within a particle filter framework. The framework exploits their complementary strengths. On the one hand, marker-based trackers can robustly recover camera position and orientation when a reference (marker) is available, but fail once the reference becomes unavailable. On the other hand, filter-based camera trackers using feature point cues can still provide predicted estimates given the previous state; however, they tend to drift and usually fail to recover when the reference reappears. We therefore propose a fusion in which the estimate of the filter is updated from the individual measurements of each cue. The distinguishing feature of the fusion filter is that it handles different types of cues within a single framework. The framework keeps a single motion model, whose prediction is corrected by one cue at a time: the marker-based cue is selected when the reference is available, and the feature point-based cue otherwise. The filter's state is updated by switching between two likelihood distributions, each adapted to the type of measurement (cue). Evaluations on real cases show that the fusion of these two approaches outperforms the individual tracking results.
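The switching-likelihood idea can be sketched in a few lines. The following is a minimal 1-D toy (not the paper's full 6-DoF tracker): a scalar camera state, a shared motion model, and a likelihood whose variance switches depending on whether the marker reference is visible. All parameter values are illustrative assumptions.

```python
import math
import random

def likelihood(particle, measurement, marker_visible):
    # Marker cue: precise (small variance); feature-point cue: noisier.
    sigma = 0.05 if marker_visible else 0.5
    return math.exp(-(particle - measurement) ** 2 / (2 * sigma ** 2))

def pf_step(particles, measurement, marker_visible, motion_noise=0.1):
    # Predict: a single motion model shared by both cues.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: switch the likelihood according to the available cue.
    weights = [likelihood(p, measurement, marker_visible) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample (multinomial).
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(500)]
for t in range(20):
    # Marker visible for the first 10 frames, then lost.
    particles = pf_step(particles, measurement=0.3, marker_visible=(t < 10))
estimate = sum(particles) / len(particles)
```

The point of the sketch is that only the likelihood changes between cues; prediction and resampling are untouched, which is what lets the filter keep a single motion model.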
Replica detection is a prerequisite for the discovery of copyright infringement and the detection of illicit content. For this purpose, content-based systems can be an efficient alternative to watermarking. Rather than imperceptibly embedding a signal, content-based systems rely on content similarity concepts. Certain content-based systems use adaptive classifiers to detect replicas. In such systems, a suspect content is tested against every original, which can become computationally prohibitive as the number of original contents grows. In this paper, we propose an image replica detection approach which hierarchically estimates, by means of R-trees, the partition of the image space where the replicas of an original lie. Experimental results show that the proposed system achieves high performance. For instance, a fraction of 0.99975 of the test images are filtered out by the system when they are unrelated to any of the originals, while only a fraction of 0.02 are wrongly rejected when the test image is a replica of one of the originals.
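The filtering principle can be illustrated with a simplified stand-in for the paper's R-tree partitioning: assume each original is summarized by the axis-aligned bounding box of its training replicas in a 2-D feature space, and a test image is filtered out when its feature vector falls inside no box. The feature values below are hypothetical; a real R-tree would additionally answer the point query in logarithmic rather than linear time.

```python
def bounding_box(points):
    # Axis-aligned bounding box of a set of 2-D feature vectors.
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def inside(box, point):
    x0, y0, x1, y1 = box
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

# Hypothetical replica feature vectors for two originals.
boxes = {
    "original_a": bounding_box([(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)]),
    "original_b": bounding_box([(0.80, 0.90), (0.85, 0.95)]),
}

def candidates(point):
    # Originals whose replica region contains the test feature vector;
    # an empty list means the test image is filtered out as unrelated.
    return [name for name, box in boxes.items() if inside(box, point)]
```

A query such as `candidates((0.13, 0.22))` returns `["original_a"]`, whereas a point far from every box, e.g. `(0.5, 0.5)`, returns an empty list and is thus rejected without testing against each original.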
In this paper, we propose a technique for image replica detection. By replica, we mean equivalent versions of a given reference image, e.g. after it has undergone operations such as compression, filtering, or resizing. Applications of this technique include the discovery of copyright infringement or the detection of illicit content. The technique is based on the extraction of multiple features from an image, namely texture, color, and the spatial distribution of colors. Related features are then grouped together, and the similarity between two images is given by several partial distances. The decision of whether a test image is a replica of a given reference image is finally derived using a Support Vector Classifier (SVC). In this paper, we show that this technique achieves good results on a large database of images. For instance, for a false negative rate of 5%, the system yields a false positive rate of only 6 × 10⁻⁵.
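The decision stage can be sketched as follows, assuming three partial distances (texture, color, color layout) between the test and reference features. The paper trains an SVC on such distance vectors; in this toy version the separating hyperplane's weights and threshold are set by hand purely for illustration, and all feature values are hypothetical.

```python
def partial_distances(feat_test, feat_ref):
    # One partial distance per feature group (texture, color, layout);
    # absolute differences stand in for the real per-group metrics.
    return [abs(a - b) for a, b in zip(feat_test, feat_ref)]

def is_replica(feat_test, feat_ref, weights=(1.0, 1.0, 1.0), bias=0.5):
    # Linear decision on the partial-distance vector: declare a replica
    # iff the weighted sum of distances stays below the threshold.
    d = partial_distances(feat_test, feat_ref)
    return sum(w * x for w, x in zip(weights, d)) < bias

reference = (0.40, 0.70, 0.20)    # hypothetical per-group feature values
compressed = (0.42, 0.68, 0.21)   # a close variant, e.g. after JPEG
unrelated = (0.90, 0.10, 0.80)    # a different image
```

Here `is_replica(compressed, reference)` is true while `is_replica(unrelated, reference)` is false; in the actual system the SVC learns the weights and threshold from labeled replica/non-replica pairs.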
This paper presents a fingerprinting method based on equivalence classes. An equivalence class is composed of a reference image and all its variations (or replicas). For each reference image, a decision function is built, which determines whether a given image belongs to the corresponding equivalence class. This function is built in three steps: synthesis, projection, and analysis. In the first step, the reference image is replicated using different image operators (such as JPEG compression, average filtering, etc.). During the projection step, the replicas are projected onto a distance space. In the final step, the distance space is analyzed using machine learning algorithms, and the decision function is built. In this study, three machine learning approaches are compared: the orthotope, the support vector machine (SVM), and support vector data description (SVDD). The orthotope is a computationally efficient ad hoc method that consists of building a generalized rectangle in the distance space. The SVM and SVDD are two more general learning algorithms.
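The orthotope approach described above is simple enough to sketch directly: in the distance space, the synthesized replicas of a reference span an axis-aligned hyper-rectangle, and class membership reduces to a per-axis bound check. The training distance vectors below are hypothetical.

```python
def fit_orthotope(distance_vectors):
    # One (min, max) interval per distance-space axis — the generalized
    # rectangle enclosing the training replicas.
    return [(min(axis), max(axis)) for axis in zip(*distance_vectors)]

def in_orthotope(orthotope, vector):
    # Membership test: every coordinate must fall inside its interval.
    return all(lo <= v <= hi for (lo, hi), v in zip(orthotope, vector))

# Distance vectors of synthesized replicas (output of the projection step).
train = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.20)]
box = fit_orthotope(train)
```

Fitting and testing are both trivial per-axis scans, which is what makes the orthotope computationally cheap compared with training an SVM or SVDD on the same distance vectors.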
KEYWORDS: 3D modeling, Optical spheres, Distortion, Steganography, Digital watermarking, Data hiding, Data modeling, Venus, Chemical species, 3D displays
This paper proposes a method to embed information into a 3D model represented by a polygonal mesh. The approach consists in slightly changing the positions of the vertices, thereby influencing the lengths of the approximated surface normals. This technique exhibits relatively low complexity and offers robustness to simple geometric transformations. In addition, it does not introduce any visible distortion to the original model.
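A vertex-perturbation embedding of this kind can be illustrated with a toy scheme (not the paper's exact method): one bit per vertex, carried by the sign of a small displacement along the vertex normal. Normals and the step size are assumed given, and the magnitude is kept far below the mesh resolution so the change stays invisible.

```python
EPS = 1e-3  # perturbation magnitude; assumed well below mesh resolution

def embed(vertices, normals, bits):
    # Nudge each vertex along its normal; the sign encodes the bit.
    out = []
    for v, n, b in zip(vertices, normals, bits):
        sign = 1.0 if b else -1.0
        out.append(tuple(vi + sign * EPS * ni for vi, ni in zip(v, n)))
    return out

def extract(watermarked, original, normals):
    # Project each displacement onto the normal to recover the sign.
    bits = []
    for w, v, n in zip(watermarked, original, normals):
        dot = sum((wi - vi) * ni for wi, vi, ni in zip(w, v, n))
        bits.append(dot > 0)
    return bits

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 3   # hypothetical per-vertex normals
payload = [True, False, True]
marked = embed(verts, norms, payload)
```

This toy extractor needs the original mesh for comparison; a blind scheme like the paper's instead derives the reference from the mesh geometry itself.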