Counterfeiting digital images through a copy-move forgery is one of the most common ways of manipulating the semantic content of a picture: a portion of the image is copy-pasted elsewhere into the same image. It may happen, however, that only an analog version of the image is available rather than its digital original. Scanned or recaptured (by a digital camera) printed documents are widely used in a number of different scenarios, for example a photo published in a newspaper or a magazine. In this paper, the problem of detecting and localizing copy-move forgeries in a printed picture is addressed. The copy-move manipulation is detected by verifying the presence of duplicated patches in the scanned image through a SIFT-based method tailored to the printed-image case. The print-and-scan/recapture scenario is quite challenging because it introduces several kinds of distortion. The goal is to experimentally investigate the conditions under which reliable copy-move forgery detection is possible. We carry out a series of experiments covering the different issues involved in this application scenario, considering diverse kinds of printing and re-acquisition circumstances. Experimental results show that forgery detection remains successful, though with reduced performance, as expected.
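The paper's detector is SIFT-based, so that matched keypoints survive the distortions of printing and scanning; as a minimal illustration of the underlying idea only (finding duplicated regions within a single image), the following toy sketch detects exact duplicated blocks. The function name and parameters are illustrative, not taken from the paper.

```python
def find_duplicated_blocks(img, block=4):
    """Toy copy-move check: flag pairs of identical `block`x`block`
    patches that are far enough apart to be a genuine copy-paste.
    (The paper matches SIFT keypoints instead, which tolerates the
    noise and geometric distortion of the print/scan channel; exact
    matching as done here would fail on a scanned image.)"""
    h, w = len(img), len(img[0])
    seen = {}      # patch content -> first position where it was found
    matches = []   # list of ((y0, x0), (y1, x1)) duplicated-patch pairs
    for y in range(0, h - block + 1):
        for x in range(0, w - block + 1):
            patch = tuple(tuple(img[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            if patch in seen and abs(seen[patch][0] - y) + abs(seen[patch][1] - x) > block:
                matches.append((seen[patch], (y, x)))
            else:
                seen.setdefault(patch, (y, x))
    return matches
```

A real detector would replace exact patch equality with robust keypoint descriptors and cluster the matched pairs to localize the forged region.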
In this paper we present a new method for the detection of forgeries in digital videos based on the sensor's pattern noise. The camera pattern noise is a unique stochastic high-frequency characteristic of imaging sensors; a forged frame in a video is detected by comparing the correlation between the noise within the frame itself and the reference pattern noise against an empirical threshold. The reference pattern serves both for identification of the camera and for authentication of the video. Such a pattern is defined as self-building because it is created from the video sequence itself as it unfolds, with a technique applied frame by frame that averages the noise extracted from each frame. The method is inherited from an existing system devised by Fridrich et al.1 for still images. Using this method we are able to determine whether all the scenes of a video sequence have been taken with the same camera and whether the number and/or the content of the frames has been modified. A large section of the paper is dedicated to the experimental results, where we demonstrate that reliable identification is possible even for video that has undergone MPEG compression or frame interpolation.
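The self-building reference and the correlation test can be sketched as follows. This is a simplified stand-in: the original scheme extracts noise with a wavelet-based denoiser, whereas here a 3x3 mean filter plays that role, and all names are illustrative.

```python
def noise_residual(frame):
    """Residual = frame minus a 3x3 mean-filtered version (a crude
    stand-in for the wavelet denoiser of the Fridrich scheme)."""
    h, w = len(frame), len(frame[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [frame[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                              for i in range(max(0, x - 1), min(w, x + 2))]
            res[y][x] = frame[y][x] - sum(nb) / len(nb)
    return res

def reference_pattern(frames):
    """Self-building reference: frame-by-frame average of the residuals."""
    acc = None
    for f in frames:
        r = noise_residual(f)
        acc = r if acc is None else [[a + b for a, b in zip(ra, rb)]
                                     for ra, rb in zip(acc, r)]
    n = len(frames)
    return [[v / n for v in row] for row in acc]

def correlation(a, b):
    """Normalized correlation between two residual maps; a frame whose
    residual correlates with the reference below an empirical threshold
    is flagged as forged / from a different camera."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = (sum((x - ma) ** 2 for x in fa) * sum((y - mb) ** 2 for y in fb)) ** 0.5
    return num / den if den else 0.0
```

In use, each incoming frame's residual is correlated against the reference accumulated so far; a drop below the threshold marks a frame that does not carry the camera's pattern noise.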
In recent years digital watermarking has been widely indicated as a possible efficient tool for multimedia copyright protection. In particular, in the field of image watermarking, a large number of techniques have been developed aiming at hiding information within the data. After an initial phase in which the main problem was to succeed in inserting a private code imperceptibly, the watermarking community subsequently tried to design watermarks with a high level of robustness, i.e., the ability to be revealed even when the image has undergone different manipulations (e.g. compression, D/A-A/D conversion, filtering). Although good results have been obtained in this field, resilience against geometric manipulations (e.g. rotation, scaling, changes of aspect ratio) is still a research issue; furthermore, robustness against the Stirmark random-displacement attack, in which small local geometric modifications are carried out, is still a crucial point for the majority of the so-called robust algorithms. In this paper a new approach is proposed, based on estimating, through an optic-flow technique, the displacement field corresponding to a specific geometric transformation. This methodology can be adopted as a re-synchronization tool to enhance the robustness of digital image watermarking techniques.
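The re-synchronization step can be illustrated as follows: given a per-pixel displacement field (which the paper estimates with an optic-flow technique; here it is assumed to be already available), the attacked image is inverse-warped so that the watermark detector sees pixels back at their original positions. Function names and the nearest-neighbour resampling choice are illustrative.

```python
def resynchronize(attacked, flow):
    """Undo a geometric attack given a dense displacement field
    flow[y][x] = (dx, dy): each output pixel is fetched from its
    displaced position in the attacked image (nearest-neighbour,
    clamped at the borders)."""
    h, w = len(attacked), len(attacked[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            sx = min(max(int(round(x + dx)), 0), w - 1)
            sy = min(max(int(round(y + dy)), 0), h - 1)
            out[y][x] = attacked[sy][sx]
    return out
```

After this inverse warp, a conventional watermark detector can be run on the re-synchronized image as if no geometric manipulation had occurred.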
KEYWORDS: Data hiding, Error analysis, Video, Visualization, Computer programming, Video coding, Error control coding, Data processing, Reconstruction algorithms, Standards development
In this paper, a new data-hiding-based error concealment algorithm is proposed. The method makes it possible to improve video quality in H.264/AVC wireless video transmission and real-time applications, where retransmission is unacceptable. Data hiding is used to carry to the decoder the values of 6 inner pixels of every macroblock (MB), which are used to reconstruct lost MBs in intra frames through a bilinear interpolation process. The side information concerning a slice is hidden in another slice of the same frame by properly modifying some quantized AC coefficients of the Integer Transform of the sixteen 4x4 blocks composing the MBs of the host slice. At the decoder, the embedded information can be recovered from the bit-stream and used in the bilinear interpolation to reconstruct the damaged slice. This method, while keeping the system fully compliant with the standard, improves performance with respect to the conventional error concealment methods adopted by H.264/AVC in terms of visual quality and Y-PSNR. In particular, it improves the interpolation process adopted by H.264/AVC by reducing the distance between interpolating pixels from 16 to 5.
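The benefit of the hidden inner pixels can be seen in a 1-D sketch of linear concealment along one row of a lost macroblock: without side information the interpolation must span the full 16-pixel MB width between neighbouring pixels, while the embedded inner pixels break that span into shorter segments. This is a simplification of the actual 2-D bilinear scheme, and the sample positions below are illustrative.

```python
def linear_conceal(samples):
    """samples: sorted list of (position, value) of known pixels along
    a row; lost pixels between them are linearly interpolated from the
    nearest known neighbours, as in spatial error concealment.  Hidden
    inner pixels simply add entries, shrinking the interpolation span."""
    out = {}
    for (p0, v0), (p1, v1) in zip(samples, samples[1:]):
        for p in range(p0, p1 + 1):
            t = (p - p0) / (p1 - p0)
            out[p] = v0 * (1 - t) + v1 * t
    return [out[p] for p in range(samples[0][0], samples[-1][0] + 1)]
```

On smoothly varying content the reconstruction error of linear interpolation grows with the square of the span, which is why reducing the distance between interpolating pixels from 16 to about 5 noticeably improves the concealed quality.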
In this paper attention is paid to the problem of remote-sensing image authentication by relying on a digital watermarking approach. Both transmission and storage are usually cumbersome for remote-sensing imagery, and compression is unavoidable to make them feasible. In the case of multi-spectral images used for classification purposes, excessive valuemetric changes can cause misclassification errors, so near-lossless compression is used to guarantee a given peak compression error. Similarly, watermarking-based authentication should also guarantee such a maximum peak error, and the requirements to be satisfied are: to allow valuemetric authentication, to control the peak watermark embedding error, and to tolerate near-lossless compression. To achieve these requirements, a methodology has been designed by integrating into the standard JPEG-LS compression algorithm, by means of a stripe approach, a known authentication technique derived from Fridrich. This procedure offers two advantages: firstly, the produced bit-stream is perfectly compliant with the JPEG-LS standard; secondly, once the image has been decoded it is already authenticated, because the information has been embedded in the reconstructed values. Near-lossless coding does not harm the authentication procedure, and robustness against different attacks is preserved. JPEG-LS coding/decoding is not significantly slowed in terms of computational time.
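The peak-error-control requirement can be illustrated with a deliberately simplified sketch: the authentication payload is forced into the parity of each sample while the change to any sample is kept within the near-lossless bound delta. This is not the actual JPEG-LS-integrated stripe scheme; the function and the parity-based embedding rule are illustrative assumptions.

```python
def embed_bits(pixels, bits, delta=1):
    """Peak-error-bounded embedding sketch: set each pixel's parity to
    the corresponding message bit, changing its value by at most
    `delta` (the same bound the near-lossless codec guarantees)."""
    out = []
    for p, b in zip(pixels, bits):
        if p % 2 == b:
            out.append(p)                       # already carries the bit
        else:
            out.append(p + 1 if p + 1 <= 255 else p - 1)
        assert abs(out[-1] - p) <= delta        # peak embedding error bound
    return out
```

Because both the watermark and the codec respect the same peak-error bound, the overall valuemetric distortion of the authenticated, compressed image stays controlled.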
KEYWORDS: Digital watermarking, Image processing, Calibration, Cultural heritage, CRTs, RGB color model, Digital imaging, Image quality, Image compression, Mathematical modeling
The goal of this paper is to present the research that has been carried out over the last 10 years in the Image Processing and Communications Lab of the University of Florence for developing applications for the cultural heritage field. In particular, research has focused on the following issues: high-resolution acquisition of paintings by means of mosaicing techniques; colour calibration of the acquisition devices; tools for forecasting the results of restoration processes (in particular with reference to the cleaning process); and systems for producing virtually restored digital copies of paintings (in particular for filling in cracks and lacunae). The problems related to the distribution of the digital copies have also been considered, in particular with reference to the watermarking of the images for copyright protection. The methodologies developed by the Lab with reference to the above-mentioned issues are described, and the main results discussed.
KEYWORDS: Video, Digital watermarking, Telecommunications, Video coding, Video compression, Video processing, Intellectual property, Electronics, Networks, Sensors
A client-server application for MPEG-4 video distribution in a VOD (Video-On-Demand) infrastructure has been built that grants authorized use by means of digital watermarking. Once the consumer has chosen to watch a program, the code of his smart card, plugged into the set-top box, is sent to the server side. This code is embedded, at that very moment, into the video sequence before it is streamed to the client through the network (the code is obviously used for payment too). The client side, suitably equipped with the watermark detector, receives the video and checks it by extracting the identifying code; this is matched against the code stored in the end user's smart card, and if the comparison succeeds rendering is allowed, otherwise decoding is stopped and playback is inhibited.
Watermarking algorithms for copyright protection are usually classified as belonging to one of two classes: detectable and readable. The aim of this paper is to present a possible approach for transforming an optimum detectable technique previously proposed by the authors into a readable one. Similarly to what has been done previously by other authors, we embed multiple copies of the watermark into the image, with their relative positions in the frequency domain encoding the informative message. The main drawback of this approach is that all copies of the watermark have to be detected without knowing their positions, i.e. all possible positions (many tens of thousands in our case) have to be tested, which is prohibitive from the point of view of computational cost. Correlation-based watermark detectors can overcome this problem by exploiting the Fast Fourier Transform (FFT) algorithm, but they are not optimum in the case of non-additive watermarks. In this paper we demonstrate how the formula of the optimum watermark detector can be recast as a correlation, thus allowing us to use the FFT to test for the watermark's presence at all possible positions: in this way a fast optimum decoding system is obtained.
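The frequency-domain trick that makes testing all positions feasible can be sketched in 1-D: the correlation of the watermark with the signal at every cyclic shift is the inverse transform of the product of one spectrum with the conjugate of the other. For clarity the sketch below uses a direct O(n^2) DFT; replacing it with an FFT gives the O(n log n) cost the paper exploits. Names and the 1-D setting are illustrative.

```python
import cmath

def dft(x, inverse=False):
    """Direct discrete Fourier transform (an FFT would be used in
    practice; this keeps the sketch dependency-free)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def all_shift_correlations(signal, watermark):
    """Correlation of `watermark` with `signal` at every cyclic shift,
    computed at once in the frequency domain:
    corr = IDFT(DFT(signal) * conj(DFT(watermark)))."""
    S = dft(signal)
    W = dft(watermark)
    prod = [a * b.conjugate() for a, b in zip(S, W)]
    return [v.real for v in dft(prod, inverse=True)]
```

The shift at which the correlation peaks identifies where a watermark copy sits, so the positions of all copies (and hence the message they encode) are recovered without an explicit search over every candidate position.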
Median filtering, both scalar and vector, has been proposed in the literature as an effective tool to refine estimated velocity fields. In this paper, the use of weighted median filtering is suggested to enhance motion estimation. Information about the confidence of pixel velocities is exploited for the design of the median filtering weights, so as to enhance the estimation across boundaries, thus resulting in a better segmentation of the velocity field. A new approach to the estimation of optical flow fields is described, coupling the simplicity of spatial filtering with the accuracy of statistical techniques based on confidence measurements. A rough vector field is first estimated by means of an LS technique. Refinement is then achieved through weighted median filtering, either vector or componentwise scalar. Experimental results show the effectiveness of the weighted median approach: performance has been evaluated on synthetic image sequences and verified on real-world video sequences. Although vector filtering is generally more accurate than scalar filtering, it is less robust to noise than componentwise filtering.
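The componentwise weighted median used in the refinement step can be sketched as follows, with the per-pixel confidence measures supplying the weights (function names are illustrative).

```python
def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative weight first
    reaches half the total.  With unit weights this reduces to the
    ordinary median; here the weights would come from confidence
    measures on the pixel velocities."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc * 2 >= total:
            return v

def weighted_median_vec(vectors, weights):
    """Componentwise scalar filtering of velocity vectors: the weighted
    median is applied independently to each component."""
    return tuple(weighted_median([v[i] for v in vectors], weights)
                 for i in range(len(vectors[0])))
```

Sliding such a filter over a neighbourhood of each pixel's estimated velocity rejects outliers (e.g. an unreliable vector near a motion boundary gets a low weight and is voted out by its confident neighbours).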
Digital watermarking has been indicated as a technique capable of coping with the problem of Intellectual Property Rights (IPR) protection of images; this result should be achieved by embedding into the data an imperceptible digital code, namely the watermark, carrying information about the copyright status of the work to be protected. In this paper, the practical feasibility of IPR protection through digital watermarking is investigated. The most common requirements that application scenarios impose on watermarking technology are discussed. Watermarking schemes are first classified according to the approach used to extract the embedded code; the impact such a classification has on watermark usability is then investigated from an application point of view. As will be shown, the effectiveness of watermarking as an IPR protection tool turns out to be heavily affected by the detection strategy, which has to be carefully matched to the application at hand. Finally, the practical case of the Tuscany and Gifu Art Virtual Gallery is considered in detail, to further explain how a watermarking technique can actually be used.