The Digimarc® Barcode is a digital watermark applied to packages and variable-data labels that carries the GS1-standard GTIN-14 data traditionally carried by a 1-D barcode. It can be read with smartphones and with the imaging-based barcode readers commonly used in grocery and retail environments. Using smartphones, consumers can engage with products, and retailers can materially increase checkout speed, improving store margins and the shopping experience. Internal testing has shown an average 53% increase in scanning throughput, enabling hundreds of millions of dollars in cost savings [1] for retailers when deployed at scale. To reach that scale, the process of embedding a digital watermark must be automated and integrated into existing workflows; creating the tools and processes to do so represents a new challenge for the watermarking community. This paper presents a description and analysis of the workflow implemented by Digimarc to deploy the Digimarc Barcode at scale, along with an overview of the tools created and the lessons learned during the introduction of the technology to the market.
This paper reports on the implementation of the Digimarc® Discover platform on Google Glass, enabling the reading of a watermark embedded in printed material or audio. The embedded watermark typically contains a unique code that identifies the containing media or object and a synchronization signal that allows the watermark to be read robustly. The Digimarc Discover smartphone application can read the watermark from a small portion of a printed image presented at any orientation or reasonable distance. Likewise, Discover can read the recently introduced Digimarc Barcode to identify and manage consumer packaged goods in the retail channel. The Digimarc Barcode has several advantages over the traditional barcode and is expected to save the retail industry millions of dollars when deployed at scale. Discover can also read an audio watermark from ambient audio captured with a microphone. The Digimarc Discover platform has been widely deployed on the iPad, iPhone, and many Android-based devices, but it has not yet been implemented on a head-worn wearable device such as Google Glass. Implementing Discover on Google Glass is a challenging task due to the current hardware and software limitations of the device. This paper identifies the challenges encountered in porting Discover to Google Glass and reports on the solutions created to deliver a prototype implementation.
KEYWORDS: Digital watermarking, Signal to noise ratio, Video, Video compression, Image compression, Sensors, Visibility, Cesium, Image processing, Error analysis
A persistent challenge with imagery captured from Unmanned Aerial Systems (UAS) is the loss of critical information, such as associated sensor and geospatial data and prioritized routing information (i.e., metadata), required to use the imagery effectively. Often there is a loss of synchronization between data and imagery. The losses usually arise from the use of separate channels for metadata, or from multiple imagery formats employed in the processing and distribution workflows that do not preserve the data. To contend with these issues and provide another layer of authentication, digital watermarks were inserted at the point of capture within a tactical UAS. Implementation challenges included traditional requirements surrounding image fidelity, performance, payload size, and robustness; application requirements such as power consumption, digital-to-analog conversion, and a fixed-bandwidth downlink; and a standards-based approach to geospatial exploitation through a service-oriented architecture (SOA) for extracting and mapping mission-critical metadata from the video stream. The authors capture the application requirements, the implementation trade-offs, and, ultimately, an analysis of the selected algorithms. A brief summary of results is provided from multiple test flights on board the SkySeer test UAS in support of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance applications within Network Centric Warfare and Future Combat Systems doctrine.
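The abstract does not specify the watermark payload format. As an illustrative sketch only, airborne metadata of this kind is commonly carried as KLV (Key-Length-Value) triplets with an integrity check appended so the extraction service can authenticate what survived the downlink; the tag numbers and field layout below are hypothetical, not the authors' format:

```python
import struct
import zlib

# Hypothetical local-set tags for the geospatial metadata discussed above.
TAG_TIMESTAMP = 2
TAG_LATITUDE = 13
TAG_LONGITUDE = 14

def klv_item(tag, value):
    """Encode one field as KLV with a BER short-form length (< 128 bytes)."""
    assert len(value) < 128
    return bytes([tag, len(value)]) + value

def build_payload(lat, lon, ts_us):
    """Pack timestamp and position into a compact payload, then append a
    CRC32 so the extractor can verify integrity after transmission."""
    body = (
        klv_item(TAG_TIMESTAMP, struct.pack(">Q", ts_us))
        + klv_item(TAG_LATITUDE, struct.pack(">d", lat))
        + klv_item(TAG_LONGITUDE, struct.pack(">d", lon))
    )
    return body + struct.pack(">I", zlib.crc32(body))

def verify(payload):
    """Recompute the CRC32 over the body and compare with the trailer."""
    body, crc = payload[:-4], struct.unpack(">I", payload[-4:])[0]
    return zlib.crc32(body) == crc

payload = build_payload(34.05, -118.25, 1_700_000_000_000_000)
print(len(payload), verify(payload))  # 34-byte payload, CRC checks out
```

A real system would balance payload size against watermark robustness, which is exactly the trade-off space the abstract describes.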
The proliferation of mobile imaging devices, combined with Moore's law, has yielded a class of devices capable of imaging and/or validating many First- and Second-Line security features. The availability of these devices at little or no cost, due to economic models and the commoditization of constituent technologies, will result in a broad infrastructure of devices capable of identifying fraud and counterfeiting. The presence of these devices has the potential to influence aspects of design, production, and usage models for value documents, both as a validation tool and as a mechanism for attack. To maximize usability as a validation tool, a better understanding is needed of the imaging capabilities of these devices and of which security features and design approaches favor them. As a first step in this direction, the authors investigated using a specific imaging-equipped cell phone as an inspection and validation tool for identity documents. The goal of the investigation was to assess the viability of the device for identifying photo swapping, image alteration, data alteration, and counterfeiting of identity documents. To do so, security printing techniques such as digital watermarking, microprinting, and a Diffractive Optically Variable Image Device were used. Based on analysis of a representative imaging-equipped cell phone (Fujitsu 900i), the authors confirmed that, within some geographies, deployed devices are capable of imaging value documents at sufficiently high resolution to enable inspection and validation usage models across a limited set of security features.
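Whether a given camera can resolve a feature such as microprinting comes down to the effective sampling resolution achieved on the document. A minimal sketch of that back-of-the-envelope calculation, using hypothetical sensor figures (not the Fujitsu 900i's actual specifications):

```python
def effective_ppi(sensor_pixels_across, document_width_in, fill_fraction=1.0):
    """Pixels per inch sampled on the document when the document occupies
    `fill_fraction` of the sensor's horizontal field of view."""
    return sensor_pixels_across * fill_fraction / document_width_in

# Hypothetical figures: a 1600-pixel-wide sensor imaging the 3.37 in
# width of an ID-1 card that fills 90% of the frame.
ppi = effective_ppi(1600, 3.37, 0.9)
print(round(ppi))  # coarse estimate of on-document sampling resolution
```

Comparing this estimate against the spatial frequency of each security feature is one way to predict, before field testing, which features a deployed device class can plausibly inspect.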
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. Quantifying the relative merits of various products is not only essential to further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies that ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a watermarking solution, validating product performance when they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, for use with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels in a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
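As a rough illustration of the approach described (not the paper's actual experiment), the sketch below evaluates a toy quality response over an L4(2³) orthogonal array, scores each run with a nominal-is-best Taguchi loss, and selects the lower-loss level of each factor. The factor names and the response function are invented for the example:

```python
# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors such that
# every pair of columns contains each level combination exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

FACTORS = ["embed_strength", "block_size", "redundancy"]  # hypothetical

def taguchi_loss(y, target=1.0, k=1.0):
    """Nominal-is-best quadratic loss: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

def quality(run):
    """Stand-in for a measured watermark quality score in [0, 1]."""
    s, b, r = run
    return 0.6 + 0.2 * s + 0.1 * r - 0.15 * s * b  # toy response surface

# Run the four experiments and record the loss of each.
losses = [taguchi_loss(quality(run)) for run in L4]

# Average the loss at each level of each factor; keep the lower-loss level.
best_levels = {}
for col, name in enumerate(FACTORS):
    level_loss = {0: [], 1: []}
    for run, loss in zip(L4, losses):
        level_loss[run[col]].append(loss)
    avg = {lvl: sum(v) / len(v) for lvl, v in level_loss.items()}
    best_levels[name] = min(avg, key=avg.get)

print(best_levels)
```

Because the orthogonal array balances the levels across runs, four experiments suffice to estimate each factor's main effect, rather than the eight a full factorial would require; the paper's exhaustive test over a population of cover works plays the validation role that the toy response function stands in for here.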