This PDF file contains the front matter associated with SPIE Proceedings Volume 8653, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Image Quality Evaluation Methods/Standards for Mobile and Digital Photography I: Joint Session with Conferences 8653, 8660, and 8667C
In this paper, we present a no-reference quality assessment algorithm for JPEG2000-compressed images called
EDIQ (EDge-based Image Quality). The algorithm works based on the assumption that the quality of JPEG2000-
compressed images can be evaluated by separately computing the quality of the edge/near-edge regions and
the non-edge regions where no edges are present. EDIQ first separates the input image into edge/near-edge
regions and non-edge regions by applying Canny edge detection and edge-pixel dilation. Our previous sharpness
algorithm, FISH [Vu and Chandler, 2012], is used to generate a sharpness map. The part of the sharpness map
corresponding to the non-edge regions is collapsed by using root mean square to yield the image quality index of
the non-edge regions. The other part of the sharpness map, which corresponds to the edge/near-edge regions, is
weighted by the local RMS contrast and the local slope of magnitude spectrum to yield an enhanced quality map,
which is then collapsed into the quality index of the edge/near-edge regions. These two indices are combined by
a geometric mean to yield a quality indicator of the input image. Testing on the JPEG2000-compressed subsets
of four different image-quality databases demonstrates that EDIQ is competitive with other no-reference image
quality algorithms on JPEG2000-compressed images.
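To make the pipeline concrete, the following Python sketch mirrors the structure described above under explicit assumptions: Canny edges are dilated to form the edge/near-edge mask, a Laplacian magnitude stands in for the FISH sharpness map, the contrast/spectral-slope weighting is reduced to a placeholder, and the two region indices are fused by a geometric mean. Function names are ours; this is not the authors' implementation.

    # Illustrative sketch only (not the authors' code): region split plus
    # geometric-mean fusion of two region-wise quality indices.
    import numpy as np
    import cv2

    def ediq_like_index(gray_u8):
        edges = cv2.Canny(gray_u8, 50, 150)                        # edge pixels
        near_edge = cv2.dilate(edges, np.ones((7, 7), np.uint8)) > 0
        # Stand-in sharpness map; EDIQ uses FISH [Vu and Chandler, 2012].
        sharp = np.abs(cv2.Laplacian(gray_u8.astype(np.float32), cv2.CV_32F))
        q_non_edge = np.sqrt(np.mean(sharp[~near_edge] ** 2))      # RMS collapse
        # Placeholder: the paper weights this part by local RMS contrast and
        # the local slope of the magnitude spectrum before collapsing.
        q_edge = np.sqrt(np.mean(sharp[near_edge] ** 2))
        return np.sqrt(q_edge * q_non_edge)                        # geometric mean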
Image Quality Evaluation Methods/Standards for Mobile and Digital Photography II: Joint Session with Conferences 8653, 8660, and 8667C
This article presents a system and a protocol to characterize image stabilization systems for both still images and videos.
It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is
programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on
different types of cameras in different use cases. The measurement uses a single chart for still images and videos, the
dead leaves texture chart. Although the proposed implementation of the protocol uses a motion platform, the
measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured
in different directions and weighted by a contrast sensitivity function (simulating the sensitivity of the human visual system) to
obtain an acutance. The sharpness improvement due to the image stabilization system is a good measure of
performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel
accuracy to determine a homographic deformation between the current frame and a reference position. This model
describes well the apparent global motion as translations, but also rotations about the optical axis and distortion due to
the electronic rolling shutter found in most CMOS sensors. The protocol is applied to all types of cameras, such as
DSCs, DSLRs, and smartphones.
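As an illustration of the video analysis step, the sketch below estimates the frame-to-reference homography from four marker positions with OpenCV. The coordinates are hypothetical, and sub-pixel marker detection is assumed to have been done upstream.

    import numpy as np
    import cv2

    # Reference marker positions on the chart and their detected positions in the
    # current frame (placeholder values; sub-pixel detection assumed done already).
    ref_pts = np.float32([[100, 100], [900, 100], [900, 700], [100, 700]])
    cur_pts = np.float32([[103.2, 98.7], [902.9, 101.4], [898.1, 703.8], [97.6, 698.9]])

    H, _ = cv2.findHomography(cur_pts, ref_pts)   # maps the current frame onto the reference
    # Per-frame translation, rotation about the optical axis and rolling-shutter
    # distortion can then be read off from, or fitted to, this transform.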
The quality of remotely sensed hyperspectral images is not easily assessed visually, as the value of the imagery is
primarily inherent in the spectral information embedded in the data. In the context of earth observation or defense
applications, hyperspectral images are generally defined as high spatial resolution (1 to 30 meter pixels) imagery
collected in dozens to hundreds of contiguous narrow (~100) spectral bands from airborne or satellite platforms.
Two applications of interest are unmixing, which can be defined as the retrieval of pixel constituent materials (usually
called endmembers) and the area fraction represented by each, and subpixel detection, which is the ability to detect
spatially unresolved objects. Our approach is a combination of empirical analyses of airborne hyperspectral imagery
together with system modeling driven by real input data. Initial results of our study show the dominance of spatial
resolution in determining the ability to detect subpixel objects and the necessity of sufficient spectral range for unmixing
accuracy. While these results are not unexpected, the research helps to quantify these trends for the situations studied.
Future work is aimed at generalizing these results and at providing new prediction tools to assist with hyperspectral
imaging sensor design and operation.
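For readers unfamiliar with unmixing as defined above, a minimal sketch under a linear mixing assumption is shown below; the endmember spectra and pixel spectrum are synthetic placeholders, not data from this study.

    import numpy as np
    from scipy.optimize import nnls

    E = np.random.rand(120, 3)                       # 120 bands x 3 endmember spectra (synthetic)
    true_frac = np.array([0.6, 0.3, 0.1])
    pixel = E @ true_frac + 0.01 * np.random.randn(120)

    frac, _ = nnls(E, pixel)                         # non-negative abundance estimates
    frac /= frac.sum()                               # optional sum-to-one normalization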
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people
onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC
video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry
but they compromise the usefulness of the recorded imagery. In this context, usefulness is defined by the presence
of enough facial information remaining in the compressed image to allow a specialist to identify a person. The
investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on
content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4)
Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly
dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress,
requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon
the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal
‘average’ bit-rates.
This paper explores the utility of visual acuity as a video quality metric for public safety applications. An
experiment has been conducted to track the relationship between visual acuity and the ability to perform a
forced-choice object recognition task with digital video of varying quality. Visual acuity is measured according
to the smallest letters reliably recognized on a reduced LogMAR chart.
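As a reminder of the bookkeeping involved, the sketch below converts the height of the smallest reliably recognized letter into a logMAR score, assuming the usual convention that an optotype subtends five times its critical detail; the chart geometry shown is hypothetical.

    import math

    def logmar(letter_height_mm, viewing_distance_mm):
        # Visual acuity score: log10 of the minimum angle of resolution in arcminutes.
        letter_arcmin = math.degrees(math.atan(letter_height_mm / viewing_distance_mm)) * 60
        mar_arcmin = letter_arcmin / 5.0             # critical detail is 1/5 of letter height
        return math.log10(mar_arcmin)

    print(logmar(letter_height_mm=8.7, viewing_distance_mm=6000))   # close to 0.0 (normal acuity)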
Stereoscopic media are now prevalent in many fields, and their biological effects have been investigated in various studies. However,
the biological effect of the luminance of stereoscopic displays has not yet been evaluated. We therefore evaluated this
effect by conducting both subjective and objective evaluations. We
measured the pupillary light reflex and the R-R interval of the electrocardiogram (ECG) as objective evaluations, and conducted
double-stimulus and single-stimulus subjective evaluations. A significant effect of the luminance change was shown by the
double-stimulus evaluation, whereas none was shown by the single-stimulus evaluation. Based on this result, the
double-stimulus evaluation is better suited than the single-stimulus evaluation for evaluating the biological effect of the luminance of stereoscopic
displays. No significant relationship was found between the pupillary light reflex and the
luminance. Although a significant relationship was obtained between the R-R interval in the ECG and the elapsed time over the 30
min stimulus, no significant relationship was found between the R-R interval and the luminance. In
addition, we confirmed experimental accuracy and reproducibility by conducting repeated experiments.
In today's context, where 3D content is more abundant than ever and its acceptance by the public is probably
definitive, there are many discussions on controlling and improving 3D quality. But what does this notion
represent precisely? How can it be formalized and standardized? How can it be correctly evaluated? A great
number of studies have investigated these matters and many interesting approaches have been proposed. Despite
this, no universal 3D quality model has been accepted so far that would allow a uniform assessment across studies
of the overall quality of 3D content as it is perceived by human observers.
In this paper, we take a step forward in the development of a 3D quality model by presenting the
results of an exploratory study in which we started from the premise that the overall perceived 3D quality is a
multidimensional concept that can be explained by the physical characteristics of the 3D content. We investigated
the spontaneous impressions of the participants while watching varied 3D content, analyzed the key notions
that appeared in their discourse, and identified correlations between their judgments and the characteristics of
our database. The test proved to be rich in results. Among its conclusions, we consider of highest importance
the fact that we could determine three different perceptual attributes (image quality, comfort, and realism)
that could constitute a first, simplistic model for assessing perceived 3D quality.
This paper presents a new method of measuring physical texture distortions (PhTD) to evaluate the performance
of high definition (HD) camcorders w.r.t. motion and lossy compression. It is extended to measure perceptual
texture distortions (PeTD) by taking into account the spatio-velocity contrast sensitivity function of the human
visual system. The PhTD gives an objective (physical) distortion of texture structures, while the PeTD measures
the perceptual distortion of textures. The dead leaves chart, invariant to scaling, translation, rotation, and
contrast, was selected as a target texture. The PhTD/PeTD metrics of the target distorted by camcorders were
measured based on a bank of Gabor filters with eight orientations and three scales. Experimental results for six
HD camcorders from three vendors showed: 1) the PhTD value increases monotonically w.r.t. the motion speed,
and decreases monotonically w.r.t. the lossy compression bitrate; 2) the PeTD value decreases monotonically
w.r.t. the motion speed, but stays almost constant w.r.t. the lossy compression bitrate. The experiment gives a
reasonable result even if the distortions are not radially symmetric. However, some subjective tests should be
done in future work to validate the performance of the perceptual texture distortion metric.
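A rough sketch of a Gabor-bank measurement in this spirit is given below; it uses OpenCV Gabor kernels with eight orientations and three illustrative scales and summarizes distortion as a difference of filter-response energies, which is a simplification rather than the paper's PhTD/PeTD definitions.

    import numpy as np
    import cv2

    def gabor_energies(gray_f32, scales=(4, 8, 16), n_orient=8):
        energies = []
        for lam in scales:                                   # three illustrative scales
            for k in range(n_orient):                        # eight orientations
                theta = k * np.pi / n_orient
                kern = cv2.getGaborKernel((31, 31), lam / 2.0, theta, lam, 0.5)
                resp = cv2.filter2D(gray_f32, cv2.CV_32F, kern)
                energies.append(np.mean(resp ** 2))
        return np.array(energies)

    def texture_distortion(ref_gray, dist_gray):
        # Physical-distortion stand-in: energy difference between reference and capture.
        return np.linalg.norm(gabor_energies(ref_gray) - gabor_energies(dist_gray))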
Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's
multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the
underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that
mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video
quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test
sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets
seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive
Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The
results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas
the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content
and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE
of audiovisual systems, content semantics and users' affective involvement will become important for assessing
QoE differences.
The aim of our research is to specify experimentally and further model spatial frequency response functions, which
quantify human sensitivity to spatial information in real complex images. Three visual response functions are measured:
the isolated Contrast Sensitivity Function (iCSF), which describes the ability of the visual system to detect any spatial
signal in a given spatial frequency octave in isolation; the contextual Contrast Sensitivity Function (cCSF), which
describes the ability of the visual system to detect a spatial signal in a given octave in an image; and the contextual Visual
Perception Function (VPF), which describes visual sensitivity to changes in suprathreshold contrast in an image. In this
paper we present relevant background, along with our first attempts to derive experimentally and further model the VPF
and CSFs. We examine the contrast detection and discrimination frameworks developed by Barten, which we find
provide a sound starting position for our own modeling purposes. Progress is presented in the following areas:
verification of the chosen model for detection and discrimination; choice of contrast metrics for defining contrast
sensitivity; apparatus, laboratory set-up and imaging system characterization; stimuli acquisition and stimuli variations;
spatial decomposition; methodology for subjective tests. Initial iCSFs are presented and compared with ‘classical’
findings that have used simple visual stimuli, as well as with more recent relevant work in the literature.
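For orientation, the sketch below evaluates one commonly quoted simplified form of Barten's contrast sensitivity model; the constants follow the widely reproduced approximation and may differ in detail from the formulation adopted in this work.

    import numpy as np

    def barten_csf(u, L=100.0, w=10.0):
        # u: spatial frequency in cycles/degree, L: luminance in cd/m^2,
        # w: angular field size in degrees (constants per the common approximation).
        a = 540.0 * (1 + 0.7 / L) ** -0.2 / (1 + 12.0 / (w * (1 + u / 3.0) ** 2))
        b = 0.3 * (1 + 100.0 / L) ** 0.15
        c = 0.06
        return a * u * np.exp(-b * u) * np.sqrt(1 + c * np.exp(b * u))

    u = np.linspace(0.5, 40, 200)
    print(u[np.argmax(barten_csf(u))])    # peak sensitivity typically falls near 3-5 cpd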
In this paper we address the problem of no-reference image quality assessment, focusing on JPEG-corrupted
images. In general, no-reference metrics are not able to measure distortions with the same performance across
their possible range and across different image contents. The crosstalk between content and distortion signals
influences human perception. We propose two strategies to improve the correlation between subjective and
objective quality data. The first strategy is based on grouping the images according to their spatial complexity;
the second is based on a frequency analysis. Both strategies are tested on two databases available in the
literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual
data, evaluated in terms of the Pearson Correlation Coefficient.
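The evaluation step can be illustrated with a short sketch that computes the Pearson Correlation Coefficient overall and within spatial-complexity groups; the metric scores, subjective scores, and group labels below are placeholders.

    import numpy as np
    from scipy.stats import pearsonr

    metric = np.random.rand(60)                        # no-reference metric scores (placeholder)
    mos = 0.8 * metric + 0.1 * np.random.rand(60)      # subjective scores (placeholder)
    complexity = np.random.randint(0, 3, 60)           # low / medium / high spatial complexity

    print("overall PCC:", pearsonr(metric, mos)[0])
    for g in range(3):
        sel = complexity == g
        print("group", g, "PCC:", pearsonr(metric[sel], mos[sel])[0])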
System Analysis and Objective Image Quality Metrics
Modeling only an HDR camera's lens blur, noise, and sensitivity is not sufficient to predict image quality. For a fuller
prediction, motion blur/artifacts must be included. Automotive applications are particularly challenging for HDR motion
artifacts. This paper extends a classic camera noise model to simulate motion artifacts. The motivation is to predict,
visualize and evaluate the motion/lighting flicker artifacts for different image sensor readout architectures. The proposed
motion artifact HDR simulator has three main components: a dynamic image source, a simple lens model, and a line-based
image sensor model. The line-based nature of the image sensor model provides an accurate simulation of how different readout
strategies sample movement or flickering lights in a given scene. Two simulation studies illustrating the model's
performance are presented. The first simulation compares the motion artifacts of frame-sequential and line-interleaved
HDR readout, while the second study compares the motion blur of 8 MP 1.4 μm, 5 MP 1.75 μm, and 3 MP 2.2 μm image
sensors under the same illumination level. Good alignment is obtained between the expected and simulated results.
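The idea of a line-based readout can be sketched as follows: each row integrates a flickering source over its own exposure window, which is what produces flicker banding and motion skew. The timing values are illustrative and do not come from the paper.

    import numpy as np

    def line_readout(rows=1080, t_exp=5e-3, t_line=15e-6, flicker_hz=100.0):
        signal = np.empty(rows)
        t = np.linspace(0.0, t_exp, 200)                    # samples within one exposure
        for r in range(rows):
            t_start = r * t_line                            # each row starts a little later
            light = 0.5 * (1 + np.sin(2 * np.pi * flicker_hz * (t_start + t)))
            signal[r] = light.mean()                        # integrated row exposure
        return signal                                       # banding pattern down the frame

    print(line_readout()[:5])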
Document scanner illumination has evolved along with general illumination technologies, and LEDs have become more and
more popular as illumination sources for document scanning. LED technologies provide a wide range of choices both
in terms of structural design and spectral composition. In this report, we examine some popular LED technologies used
in document scanners. We evaluate the color rendering performance of scanner models with different illumination
technologies by examining their rendering of the Macbeth ColorChecker™ in sRGB. We found that using more phosphors in
phosphor-conversion white LEDs is not necessarily advantageous in terms of scanner color rendering
performance. Also, CIS-type scanners may be sensitive to peak wavelength shifts, which can be particularly problematic
when the peaks fall outside a certain range.
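The kind of check implied by this evaluation can be sketched as a colour-difference comparison between reference and scanned ColorChecker patch values in sRGB; the patch values below are random placeholders and the ΔE00 metric is our choice for illustration.

    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede2000

    ref_srgb = np.random.rand(24, 1, 3)                             # 24 reference patches (placeholder)
    scan_srgb = np.clip(ref_srgb + 0.02 * np.random.randn(24, 1, 3), 0, 1)

    dE = deltaE_ciede2000(rgb2lab(ref_srgb), rgb2lab(scan_srgb))    # per-patch colour difference
    print("mean deltaE00:", dE.mean())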
Image texture is the term given to the information-bearing fluctuations such as those for skin, grass and fabrics. Since
image processing aimed at reducing unwanted fluctuations (noise and other artifacts) can also remove important texture,
good product design requires a balance between the two. The texture-loss MTF method, currently under international
standards development, is aimed at the evaluation of digital and mobile-telephone cameras for capture of image texture.
The method uses image fields of pseudo-random objects, such as overlapping disks, often referred to as ‘dead-leaves’
targets. The analysis of these target images is based on noise-power spectrum (NPS) measurements, which are subject to
estimation error. We describe a simple method for compensation of non-stationary image statistics, aimed at improving
practical NPS estimates. A benign two-dimensional linear function (plane) is fit to the data and subtracted. This method
was implemented and results were compared with those without compensation. The adapted analysis method resulted in
reduced NPS and MTF measurement variation (20%) and low-frequency bias error. This is a particular advantage at low
spatial frequencies, where texture-MTF scaling is performed. We conclude that simple trend removal should be used.
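A minimal sketch of the trend-removal step, assuming a least-squares plane fit, is shown below; the NPS normalization is schematic rather than the standard's exact definition.

    import numpy as np

    def remove_plane(patch):
        # Fit a 2-D plane z = ax + by + c by least squares and subtract it.
        h, w = patch.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
        coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
        return patch - (A @ coeffs).reshape(h, w)

    def nps_2d(patch, pixel_pitch=1.0):
        # Schematic noise-power spectrum of the detrended patch.
        h, w = patch.shape
        F = np.fft.fft2(remove_plane(patch))
        return (np.abs(F) ** 2) * pixel_pitch ** 2 / (h * w)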
In this paper, we describe the results of a study designed to investigate the effectiveness of peak signal-to-noise
ratio (PSNR) as a quality estimator when measured in various feature domains. Although PSNR is well known to
be a poor predictor of image quality, PSNR has been shown to be quite effective for additive, pixel-based distortions.
We hypothesized that PSNR might also be effective for other types of distortions which induce changes to other
visual features, as long as PSNR is measured between local measures of such features. Given a reference and
distorted image, five feature maps are measured for each image (lightness distance, color distance, contrast, edge
strength, and sharpness). We describe a variant of PSNR in which quality is estimated based on the extent to
which these feature maps for the reference image differ from the corresponding maps for the distorted image.
We demonstrate how this feature-based approach can lead to improved estimators of image quality.
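A minimal sketch of the feature-domain PSNR idea follows, using a generic gradient-magnitude edge-strength map as the example feature; the paper's five specific maps are not reproduced here.

    import numpy as np
    from scipy import ndimage

    def psnr(a, b, peak):
        mse = np.mean((a - b) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    def edge_strength(img):
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        return np.hypot(gx, gy)

    def feature_psnr(ref, dist):
        # PSNR measured between feature maps rather than raw pixel values.
        f_ref, f_dist = edge_strength(ref), edge_strength(dist)
        return psnr(f_ref, f_dist, peak=f_ref.max())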
In this paper, we propose a new method for blind/no-reference image quality assessment based on the
log-derivative statistics of natural scenes. The new method, called DErivative Statistics-based Image QUality
Evaluator (DESIQUE), extracts image quality-related statistical features at two image scales in both the spatial and
frequency domains, upon which a two-stage framework is employed to evaluate image quality. In the spatial
domain, normalized luminance values of an image are modeled in two ways: point-wise statistics for single
pixel values and pairwise log-derivative statistics for the relationship of pixel pairs. In the frequency
domain, log-Gabor filters are used to extract the high-frequency component of an image, which is also modeled
by the log-derivative statistics. All of these statistics are characterized by a generalized Gaussian distribution
model, the parameters of which form the underlying features of the proposed method. Experimental results show
that DESIQUE not only leads to considerable performance improvements, but also maintains high computational
efficiency.
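Two of the ingredients named above can be sketched briefly: a horizontal log-derivative map and a moment-matching fit of a generalized Gaussian shape parameter. The constants and neighborhood choice below are illustrative, not DESIQUE's exact feature set.

    import numpy as np
    from scipy.special import gamma as G

    def log_derivative_h(img, eps=0.1):
        L = np.log(np.abs(img.astype(np.float64)) + eps)
        return L[:, 1:] - L[:, :-1]                        # horizontal neighbor differences

    def fit_ggd_shape(x):
        # Moment-matching estimate of the GGD shape parameter via a lookup table.
        x = x.ravel()
        rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
        shapes = np.arange(0.2, 10.0, 0.001)
        ratios = G(2 / shapes) ** 2 / (G(1 / shapes) * G(3 / shapes))
        return shapes[np.argmin((ratios - rho) ** 2)]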
Grain is one of several attributes described in ISO/IEC TS 24790, a technical specification for the measurement of
image quality for monochrome printed output. It defines grain as aperiodic fluctuations of lightness at spatial frequencies greater than
0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC
13660. Since this definition places no bounds on the upper frequency range, higher-frequency fluctuations (such
as those from the printer’s halftone pattern) could contribute significantly to the measurement of grain artifacts.
In a previous publication, we introduced a modification to the ISO/IEC 13660 grain measurement algorithm
that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations.
This modification improves the algorithm’s correlation with the subjective evaluation of experts who rated the
severity of printed grain artifacts.
Seeking to improve upon the grain algorithm in ISO/IEC 13660, the ISO/IEC TS 24790 committee evaluated
several graininess metrics. This led to the selection of the above wavelet-based approach as the top candidate
algorithm for inclusion in a future ISO/IEC standard. Our recent experimental results showed r2 correlation
of 0.9278 between the wavelet-based approach and the subjective evaluation conducted by the ISO committee
members based upon 26 samples covering a variety of printed grain artifacts. On the other hand, our experiments
on the same data set showed much lower correlation (r2 = 0.3555) between the ISO/IEC 13660 approach and
the same subjective evaluation of the ISO committee members.
In addition, we introduce an alternative approach for measuring grain defects based on spatial frequency analysis
of wavelet-filtered images. Our goal is to establish a link between the spatial-based grain (ISO/IEC TS 24790)
approach and its equivalent frequency-based one in light of Parseval’s theorem. Our experimental results showed
r2 correlation near 0.99 between the spatial and frequency-based approaches.
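The band-pass idea and the Parseval link can be sketched as follows; the wavelet, level count, and kept bands are arbitrary choices for illustration, not the TS 24790 parameterization.

    import numpy as np
    import pywt

    def bandpass_grain(lightness, wavelet="db2", keep=(2, 3, 4)):
        coeffs = pywt.wavedec2(lightness, wavelet, level=6)
        coeffs[0] = np.zeros_like(coeffs[0])                          # drop the approximation
        for i in range(1, len(coeffs)):
            if i not in keep:                                         # zero out unwanted detail bands
                coeffs[i] = tuple(np.zeros_like(c) for c in coeffs[i])
        filtered = pywt.waverec2(coeffs, wavelet)
        grain_spatial = np.sqrt(np.mean(filtered ** 2))               # RMS fluctuation in space
        # Parseval: the same energy can be read from the 2-D spectrum.
        grain_spectral = np.sqrt(np.sum(np.abs(np.fft.fft2(filtered)) ** 2)) / filtered.size
        return grain_spatial, grain_spectral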
Laser electrophotographic printers are complex systems that can generate prints with a number of possible
artifacts that are very different in nature. It is a challenging task to develop a single processing algorithm that
can effectively identify such a wide range of print quality defects.
In this paper, we describe an image processing and analysis pipeline that can effectively assess the presence
of a wide range of artifacts, as a general approach. In our paper, we will discuss in detail the algorithm that
comprises the image processing and analysis pipeline, and will illustrate the efficacy of the pipeline with a number
of examples.
In this paper we deal with a new Technical Specification that provides a method for the objective measurement of print
quality characteristics contributing to perceived printer resolution: “ISO/IEC TS 29112:2012: Information Technology
– Office equipment – Test charts and methods for measuring monochrome printer resolution”. The Technical
Specification is aimed at monochrome electrophotographic printing systems. Since the referenced measures should be
system- and technology-independent, inkjet printing systems are included in our study as well. In order to verify whether the given
objective methods correlate well with human perception, a psychophysical experiment has been conducted, and the
objective methods have been compared against the perceptual data.
Laser electrophotographic printers are complex systems with many rotating components that are used to advance
the media, and facilitate the charging, exposure, development, transfer, fusing, and cleaning steps. Irregularities
that are constant along the axial direction of a roller or drum, but which are localized in circumference can give
rise to distinct isolated bands in the output print that are constant in the scan direction, and which may or may
not be observed to repeat at an interval in the process direction that corresponds to the circumference of the
roller or drum that is responsible for the artifact.
In this paper, we describe an image processing and analysis pipeline that can effectively assess the presence
of isolated periodic and aperiodic bands in the output from laser electrophotographic printers. In our paper, we
will discuss in detail the algorithms that comprise the image processing and analysis pipeline, and will illustrate
the efficacy of the pipeline with an example.
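A conceptual sketch of band detection via a process-direction projection is shown below; the smoothing window and threshold are placeholders rather than the tuned pipeline described in the paper.

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def find_bands(scan_gray, z_thresh=4.0):
        profile = scan_gray.mean(axis=1)                   # average along the scan direction
        baseline = uniform_filter1d(profile, size=201)     # slow page-level trend
        resid = profile - baseline
        z = (resid - resid.mean()) / resid.std()
        return np.flatnonzero(np.abs(z) > z_thresh)        # row indices of candidate bands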
In order to use scientific expert evidence in court hearings, several criteria must be met. In the US jurisdiction, the
Daubert decision [2] has defined several criteria that might be assessed if a testimony is challenged. In particular,
the potential for testing or actual testing, as well as the known or potential error rate, are two very important
criteria. In order to be able to compare results with each other, the reproducible creation of evaluation
samples is necessary. However, each latent fingerprint is unique due to external influence factors such as sweat
composition or pressure during the application of a trace. Hence, Schwarz [1] introduces a method to print latent
fingerprints using ink jet printers equipped with artificial sweat. In this paper we assess the image quality in
terms of reproducibility and clarity of the printed artificial sweat patterns. For that, we determine the intra-class
variance from one printer on the same and on different substrates based on a subjective assessment, as well as
the inter-class variance between different printers of the same model using pattern recognition techniques. Our
results indicate that the intra-class variance is primarily influenced by the drying behavior of the amino acid.
The inter-class variance is surprisingly large between identical models of one printer. Our evaluation is performed using
100 samples on an overhead foil and 50 samples on a compact disk surface with 5 different patterns (two line
structures, a fingerprint image, and two different arrows with a larger area of amino acid) acquired with a
Keyence VK-X110 laser scanning confocal microscope [11]. The results show a significant difference between the
two identical printers, allowing them to be differentiated with an accuracy of up to 99%.
Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer
products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by
scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric:
linear regression and the support vector machine. We have implemented the image quality ruler, based on the
recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and
20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a
set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest
that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined
and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale
non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used
these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages
to train the two different predictors - one based on linear regression and the other based on the support vector
machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
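The prediction stage can be illustrated with a compact sketch that trains linear-regression and SVM predictors on page-level features against image quality ruler scores and reports five-fold cross-validation performance; the feature and score arrays below are synthetic.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(35, 5)     # graininess, mottle, large-area variation, jitter, non-uniformity
    y = X @ np.array([2.0, 1.5, 1.0, 0.5, 0.8]) + 0.1 * np.random.randn(35)

    for name, model in [("linear", LinearRegression()), ("svm", SVR(kernel="rbf", C=10.0))]:
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(name, scores.mean())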
Wavelets are a powerful tool that can be applied to problems in image processing and analysis. They provide a multi-scale
decomposition of an original image into average terms and detail terms that capture the characteristics of the image at
different scales. In this project, we develop a figure of merit for macro-uniformity that is based on wavelets. We use the
Haar basis to decompose the image of the scanned page into eleven levels. Starting from the lowest frequency level, we
group the eleven levels into three non-overlapping separate frequency bands, each containing three levels. Each frequency
band image consists of the superposition of the detail images within that band. We next compute 1-D horizontal and
vertical projections for each frequency band image. For each frequency band image projection, we develop a structural
approximation that summarizes the essential visual characteristics of that projection. For the coarsest band comprising
levels 9,10,11, we use a generalized square-wave approximation. For the next coarsest band comprising levels 6,7,8, we
use a piecewise linear spline approximation. For the finest bands comprising levels 3,4,5, we use a spectral decomposition.
For each 1-D approximation signal, we define an appropriate set of scalar-valued features. These features are used to
design two predictors, one based on linear regression and the other based on the support vector machine, which are trained
with data from our image quality ruler experiments with human subjects.
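A sketch of the decomposition and projection steps, assuming PyWavelets and a scanned page large enough for eleven Haar levels, is given below; the band grouping follows the text, while everything else is a placeholder.

    import numpy as np
    import pywt

    def band_projections(page, bands=((9, 10, 11), (6, 7, 8), (3, 4, 5))):
        coeffs = pywt.wavedec2(page, "haar", level=11)
        out = []
        for band in bands:
            kept = [np.zeros_like(coeffs[0])]              # suppress the approximation
            for i in range(1, len(coeffs)):
                level = 12 - i                             # wavedec2 lists coarsest details first
                kept.append(coeffs[i] if level in band else
                            tuple(np.zeros_like(c) for c in coeffs[i]))
            band_img = pywt.waverec2(kept, "haar")         # superposition of the band's details
            out.append((band_img.mean(axis=0), band_img.mean(axis=1)))   # 1-D projections
        return out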
This paper is devoted to an algorithm for generating PDFs with vector symbols from scanned documents. The complex
multi-stage technique includes segmentation of the document into text/drawing areas and background, conversion of
symbols to lines and Bezier curves, and storage of the compressed background and foreground. In the paper we concentrate on
symbol conversion, which comprises segmentation of symbol bodies with resolution enhancement, contour tracing, and
approximation. The presented method outperforms competitive solutions and secures the best compression rate/quality ratio.
Scaling of the initial document to other sizes, as well as several printing/scanning-to-PDF iterations, demonstrates the advantages of
the proposed approach to handling document images. A numerical vectorization quality metric was also developed. The outcomes
of OCR software and a user opinion survey confirm the high quality of the proposed method.
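The symbol-conversion step can be sketched in toy form with OpenCV contour tracing and polyline approximation; Bezier fitting, resolution enhancement, and the rest of the pipeline described above are not reproduced.

    import cv2

    def trace_symbols(binary_u8, eps=1.0):
        # Trace outer contours of binarized symbol bodies and simplify them.
        contours, _ = cv2.findContours(binary_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        return [cv2.approxPolyDP(c, eps, True) for c in contours]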
CEA Valduc uses several X-ray generators to carry out many inspections: void search, welding expertise, gap
measurements, etc. Most of these inspections are carried out on silver-based plates. Several years ago, CEA/Valduc
decided to qualify new devices such as digital plates or CCD/flat-panel detectors. On the one hand, the choice of this
technological orientation anticipates the assumed and eventual disappearance of silver-based plates; on the other hand,
it also keeps our skills and expertise up to date.
The main improvement brought by digital plates is the continuous progress in measurement accuracy, especially
with image data processing. It is now common to measure defect thickness or depth position within a part. In such
applications, image data processing is used to obtain complementary information compared to scanned silver-based
plates. The scanning procedure is harmful to measurements: it degrades the resolution, adds
numerical noise, and is time consuming. Digital plates make it possible to eliminate the scanning procedure and to increase
resolution. It is nonetheless difficult to define a single criterion for the quality of digital images. A procedure has to
be defined in order to estimate the quality of the digital data itself; the impact of the scanning device and the configuration
parameters must also be taken into account.
This presentation deals with the qualification process developed by CEA/Valduc for digital plates (DUR-NDT), based on
the study of quantitative criteria chosen to define a direct numerical image quality that can be compared with scanned
silver-based pictures and the classical optical density.
The versatility of the X-ray parameters is also discussed (X-ray tube voltage, intensity, exposure time). The aim is to
transfer CEA/Valduc's years of experience with silver-based plate inspection to these new digital plate
supports. This is an important industrial challenge.
Image quality assessment as perceived by humans is of crucial importance in numerous fields of image processing.
Transmission and storage of digital media require efficient methods to reduce the large number of bits needed to store an image,
while maintaining sufficiently high quality compared to the original image. Since subjective evaluations cannot be
performed in every scenario, it is necessary to have objective metrics that predict image quality consistently with human
perception. However, objective metrics that consider high levels of the human visual system are still limited. In this
paper, we investigate the possibility of automatically predicting, based on saliency maps, the minimum image quality
threshold at which humans can perceive the elements in a compressed image. We conducted a series of
subjective tests in which human observers were exposed to compressed images with decreasing compression rates. The
normalized absolute error metric was used to measure the difference between the saliency maps of the compressed and the original image.
Our results indicate that the elements in the image are perceived by most human
subjects not at a specific compressed image quality level, but rather once a saliency map difference threshold is reached.
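The comparison metric named above can be written down directly; a minimal sketch, leaving open how the saliency maps themselves are produced, is:

    import numpy as np

    def normalized_absolute_error(sal_ref, sal_comp):
        # Difference between the original-image and compressed-image saliency maps.
        sal_ref = sal_ref.astype(np.float64)
        sal_comp = sal_comp.astype(np.float64)
        return np.sum(np.abs(sal_ref - sal_comp)) / np.sum(np.abs(sal_ref))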
An essential part of characterizing and improving imaging system performance and modeling is the determination of spectral responsivity; namely, the spectral band shape and out-of-band response. These complicated measurements have heretofore been difficult to make consistently with do-it-yourself solutions. To address this industry-wide problem, Labsphere has developed an automated spectral response measurement station, incorporating several techniques to enhance accuracy and ease of use. This presentation will cover the physics and considerations behind the scaling of these types of systems and the experimental methodology required to assure absolute traceability, as well as some of the lessons learned along the way.