This paper explores techniques that use the embedder's knowledge of the cover work to determine the watermark signal added to it. While the receiver always seeks to maximize a detection statistic that is a function of an a priori known pseudorandom sequence, the signal added to the cover work by the embedder is allowed to vary, on a per-chip basis, according to the characteristics of the cover work. Although adaptation of an added watermark signal can be aimed at minimizing visual artifacts, this paper focuses on adapting the watermark signal to improve its readability, outside of any human visual system constraints. This idea can be applied in various scenarios; two specific examples are discussed. When source models are available and maximum-likelihood detection is used, the added watermark signal can adapt to host-signal variations so as to maximize the likelihood-ratio detection statistic used at the receiver. Another instance where per-chip variation can be put to use is when a pre-filter is applied to suppress the cover work before reading the watermark. In this case, the watermark signal is varied so as to maximize the signal at the output of the pre-filter.
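To make the pre-filter case concrete, here is a minimal sketch (our own illustration, not code from the paper) of per-chip embedding against a known linear pre-filter. Because the detector correlates the pre-filtered image with the pseudorandom pattern, and the filter F is linear, the watermark's contribution to the statistic is ⟨F(w), p⟩; under a fixed power budget this inner product is maximized by choosing w proportional to the adjoint filter applied to p. The Laplacian kernel, power normalization, and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Symmetric Laplacian kernel, so the adjoint of the filter equals the
# filter itself (F^T == F).
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def pre_filter(img):
    """Detector-side pre-filter that suppresses the (smooth) cover."""
    return convolve(img, LAPLACIAN, mode='wrap')

def embed(cover, prn, per_pixel_power):
    """Add the power-constrained watermark maximizing the pre-filter
    output correlation: w proportional to F^T applied to the chips."""
    w = pre_filter(prn)
    w *= np.sqrt(per_pixel_power * w.size) / np.linalg.norm(w)
    return cover + w

def detect(image, prn):
    """Normalized correlation of the pre-filtered image with the
    a priori known pseudorandom pattern."""
    f = pre_filter(image)
    return float(f.ravel() @ prn.ravel()) / (
        np.linalg.norm(f) * np.linalg.norm(prn))
```

Shaping each chip by the adjoint filter concentrates the embedding power where the pre-filter passes energy, rather than wasting it in bands the detector will suppress.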
KEYWORDS: Digital watermarking, Cameras, Point spread functions, CCD cameras, Sensors, Digital cameras, Signal to noise ratio, CMOS cameras, Image processing, Amplifiers
Many articles covering novel techniques, theoretical studies, attacks, and analyses have been published recently in the field of digital watermarking. In the interest of expanding commercial markets and applications of watermarking, this paper is part of a series of papers from Digimarc on practical issues associated with commercial watermarking applications. In this paper we address several practical issues associated with the use of web cameras for watermark detection. Beyond the obvious issues of resolution and sensitivity, we explore the tradeoff between gain and integration time in improving sensitivity, and the effects of fixed-pattern noise, time-variant noise, and lens and Bayer-pattern distortions. Furthermore, the ability to control (or at least determine) camera characteristics, including white balance, interpolation, and gain, has proven critical to the successful application of watermark readers based on web cameras. These issues and tradeoffs are examined with respect to typical spatial-domain and transform-domain watermarking approaches.
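As a rough illustration of the gain versus integration-time tradeoff (a back-of-envelope model with invented numbers, not results from the paper): longer integration accumulates more signal electrons and eventually becomes shot-noise limited, while analog gain lifts signal and sensor noise together and mainly helps against downstream quantization noise.

```python
import numpy as np

def snr_db(flux_e_per_s, t, gain,
           read_noise_e=10.0, adc_noise_dn=0.5, e_per_dn=5.0):
    """Illustrative sensor SNR model; all parameter values are assumed.
    Signal electrons grow linearly with exposure t; shot noise grows as
    sqrt(signal); read noise is per frame; ADC quantization noise,
    referred back to electrons, shrinks with analog gain."""
    signal_e = flux_e_per_s * t
    shot = np.sqrt(signal_e)
    quant_e = adc_noise_dn * e_per_dn / gain
    noise = np.hypot(np.hypot(shot, read_noise_e), quant_e)
    return 20 * np.log10(signal_e / noise)

# Doubling exposure at unity gain beats halving exposure at 2x gain
# once shot noise dominates:
print(snr_db(2000, t=0.10, gain=1))   # ~21 dB
print(snr_db(2000, t=0.05, gain=2))   # ~17 dB
```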
A common application of digital watermarking is to encode a small packet of information in an image, such as some form of identification that can be represented as a bit string. One class of digital watermarking techniques employs spread-spectrum-like methods in which each bit is redundantly encoded throughout the image to mitigate bit errors. We typically require that all bits be recovered with high reliability for the watermark to be read successfully. In many watermarking applications, however, straightforward application of spread-spectrum techniques is not enough for reliable watermark recovery, and we resort to additional techniques such as error correction coding. M-ary modulation, as proposed by M. Kutter [1], is one such technique for decreasing the probability of error in watermark recovery. It was shown [1] that M-ary modulation can provide a performance improvement over binary modulation, but direct comparisons to systems using error correction codes were not made. In this paper we examine the comparative performance of watermarking systems using M-ary modulation and watermarking systems using binary modulation combined with various forms of error correction, in a framework that addresses both computational complexity and performance.
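For readers unfamiliar with the scheme, the following is a minimal sketch of M-ary spread-spectrum signaling as summarized above; the codebook size, chip count, and embedding strength are illustrative assumptions. Each symbol carries log2(M) bits: the embedder adds one of M pseudorandom sequences, and the reader decides by maximum correlation over the codebook rather than bit by bit.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 1024                            # 16-ary symbols, 1024 chips each
codebook = rng.choice([-1.0, 1.0], size=(M, N))

def embed_symbol(host, symbol, alpha=1.5):
    """Add the spreading sequence for one 4-bit symbol to the host."""
    return host + alpha * codebook[symbol]

def read_symbol(received):
    """Maximum-correlation decision over all M candidate sequences."""
    return int(np.argmax(codebook @ received))

host = rng.normal(0.0, 10.0, N)
sym = 11
print(read_symbol(embed_symbol(host, sym)) == sym)   # usually True
```

The maximum-correlation decision is what distinguishes M-ary signaling from binary modulation plus coding: one joint decision over M hypotheses replaces log2(M) independent bit decisions.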
Quantization Index Modulation (QIM) has been shown to be a promising method of digital watermarking. It has recently been argued that a version of QIM can provide the best information-embedding performance possible in an information-theoretic sense. This performance can be demonstrated via random coding using a sequence of vector quantizers of increasing block length, with both channel capacity and optimal rate-distortion performance reached in the limit of infinite quantizer block length. For QIM, however, the rate-distortion performance of the component quantizers is unimportant. Because the quantized values are not digitally encoded in QIM, the number of reconstruction values in each quantizer is not a design constraint, as it is in the design of a conventional quantizer. The lack of a rate constraint suggests that quantizer design for QIM involves different considerations than quantizer design for rate-distortion performance. Lookabaugh has identified three types of advantages of vector quantizers over scalar quantizers: the space-filling, shape, and memory advantages. This paper investigates whether all of these advantages are useful in the context of QIM. QIM performance of various types of quantizers is presented, and a heuristic sphere-packing argument is used to show that, in the case of high-resolution quantization and a Gaussian attack channel, only the space-filling advantage is necessary for nearly optimal QIM performance. This is important because relatively simple quantizers are available that provide a space-filling gain without shape or memory gain.
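As a point of reference for the simplest quantizer family discussed, here is a minimal sketch of binary scalar QIM (dither modulation); the step size and function names are our own. Two uniform quantizers offset by half a step encode one bit per sample, and the decoder picks whichever lattice the received sample is nearer to.

```python
import numpy as np

def qim_embed(x, bits, delta=8.0):
    """Quantize each sample onto the lattice selected by its bit:
    multiples of delta for 0, multiples offset by delta/2 for 1."""
    d = np.where(bits, delta / 2.0, 0.0)
    return np.round((x - d) / delta) * delta + d

def qim_decode(y, delta=8.0):
    """Decode by nearest lattice: residues near delta/2 mean bit 1."""
    r = np.mod(y, delta)
    return (np.abs(r - delta / 2.0) < delta / 4.0).astype(int)

x = np.array([100.3, -7.2, 42.0])
bits = np.array([1, 0, 1])
y = qim_embed(x, bits)
assert (qim_decode(y) == bits).all()
```

With step delta, the embedding perturbation is at most delta/2 per sample and the decision survives perturbations below delta/4; vector quantizers improve on this scalar baseline chiefly through the space-filling advantage, consistent with the argument above.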