This PDF file contains the front matter associated with SPIE Proceedings Volume 7446, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Multi-modal microscopy, such as combined bright-field and multi-color fluorescence imaging, allows capturing a sample's
anatomical structure, cell dynamics, and molecular activity in distinct imaging channels. However, only a limited number
of channels can be acquired simultaneously, and acquiring each channel sequentially at every time-point drastically reduces
the achievable frame rate. Multi-modal imaging of rapidly moving objects (such as the beating embryonic heart), which
requires high frame rates, has therefore remained a challenge. We have developed a method to temporally register multi-modal,
high-speed image sequences of the beating heart that were sequentially acquired. Here we describe how maximizing
the mutual information of time-shifted wavelet coefficient sequences leads to an implementation that is both accurate and
fast. Specifically, we validate our technique on synthetically generated image sequences and show its effectiveness on
experimental bright-field and fluorescence image sequences of the beating embryonic zebrafish heart. This method opens
the prospect of cardiac imaging in multiple channels at high speed without the need for multiple physical detectors.
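The core registration step, maximizing mutual information over candidate time shifts, can be sketched for one-dimensional coefficient sequences as follows. This is a minimal illustration with a histogram-based MI estimate on a synthetic sinusoid; the function names and test signals are ours, not the authors'.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information between two 1-D sequences."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                                   # avoid log(0) terms
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def best_shift(ref, seq, max_shift):
    """Circular shift of `seq` (in samples) that maximizes MI with `ref`."""
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [mutual_information(ref, np.roll(seq, s)) for s in shifts]
    return shifts[int(np.argmax(scores))]

# Synthetic test: a periodic signal and a copy advanced by 5 samples.
t = np.linspace(0, 4 * np.pi, 400)
ref = np.sin(t)
seq = np.roll(ref, -5)
print(best_shift(ref, seq, 10))                  # prints 5
```

On real data the same search would run on wavelet coefficient sequences of the two channels rather than on raw signals.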
We propose an active mask segmentation framework that combines the advantages of statistical modeling,
smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution
and active contours respectively. At the crux of this framework is a paradigm shift from evolving
contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active
mask framework is particularly suited to segment digital images. We demonstrate the use of the framework
in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments
reveal that statistical modeling helps the multiple masks converge from a random initial configuration to
a meaningful one. This obviates the need for an involved initialization procedure germane to most of the
traditional methods used to segment fluorescence microscope images. While we provide the mathematical
details of the functions used to segment fluorescence microscope images, this is only an instantiation of the
active mask framework. We suggest some other instantiations of the framework to segment different types
of images.
In this paper, we explore the use of anatomical information as a guide in the image formation
process of fluorescence molecular tomography (FMT). Namely, anatomical knowledge obtained
from high resolution computed tomography (micro-CT) is used to construct a model for the
diffusion of light and to constrain the reconstruction to areas that are candidates to contain fluorescent
volumes. Moreover, a sparse regularization term is added to the state-of-the-art least square
solution to contribute to the sparsity of the localization. We present results showing the increase
in accuracy of the combined system over conventional FMT, for a simulated experiment of lung
cancer detection in mice.
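As a rough sketch of the reconstruction idea, a least-squares data term with an added ℓ1 (sparsity) penalty and a hard anatomical support constraint can be minimized with a projected iterative soft-thresholding scheme. This is our own toy example with a random system matrix, not the authors' FMT forward model or solver.

```python
import numpy as np

def ista(A, y, lam, mask, n_iter=1000):
    """ISTA for min_x ||Ax - y||^2 + lam * ||x||_1, with x additionally
    restricted to the anatomically admissible support `mask`.
    A toy sketch, not the authors' FMT reconstruction."""
    L = np.linalg.norm(A, 2) ** 2              # spectral norm^2 (gradient Lipschitz / 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft-threshold
        x[~mask] = 0.0                         # keep only candidate voxels
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))             # toy sensitivity matrix
x_true = np.zeros(100)
x_true[[10, 30]] = [2.0, -1.5]                 # two fluorescent inclusions
mask = np.zeros(100, dtype=bool)
mask[:50] = True                               # "anatomy" admits the first 50 voxels only
x_hat = ista(A, A @ x_true, lam=10.0, mask=mask)
print(np.flatnonzero(np.abs(x_hat) > 0.5))     # should print [10 30]
```

The mask plays the role of the micro-CT-derived candidate regions; the ℓ1 term is the sparse regularization added to the least-squares solution.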
Noise level and photobleaching are cross-dependent problems in biological fluorescence microscopy. Indeed,
observation of fluorescent molecules is challenged by photobleaching, a phenomenon whereby the fluorophores
are degraded by the excitation light. One way to control this process is by reducing the intensity of the light or the
time exposure, but it comes at the price of decreasing the signal-to-noise ratio (SNR). Although a host of denoising
methods have been developed to increase the SNR, most are post-processing techniques and require full data
acquisition. In this paper we propose a novel technique, based on Compressed Sensing (CS) that simultaneously
enables reduction of exposure time or excitation light level and improvement of image SNR. Our CS-based
method can simultaneously acquire and denoise data, based on statistical properties of the CS optimality, noise
reconstruction characteristics and signal modeling applied to microscopy images with low SNR. The proposed
approach is an experimental optimization combining sequential CS reconstructions in a multiscale framework
to perform image denoising. Simulated and practical experiments on fluorescence image data demonstrate that
thanks to CS denoising we obtain images with similar or increased SNR while still being able to reduce exposure
times. Such results open the door to new mathematical imaging protocols, offering the opportunity to reduce
photobleaching and help biological applications based on fluorescence microscopy.
We consider the problem of estimating the channel response between multiple source receiver pairs all sensing the
same medium. A different pulse is sent from each source, and the response is measured at each receiver. If each
source sends its pulse while the others are silent, estimating the channel is a classical deconvolution problem.
If the sources transmit simultaneously, estimating the channel requires "inverting" an underdetermined system
of equations. In this paper, we show how this second scenario relates to the theory of compressed sensing. In
particular, if the pulses are long and random, then the channel matrix will be a restricted isometry, and we
can apply the tools of compressed sensing to simultaneously recover the channels from each source to a single
receiver.
This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously
to a single receiver over m degrees of freedom. Each user transmits with probability λ, where typically λn < m ≪ n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d.
Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection
problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds
on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and
orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly
better performance than single-user detection. These methods do achieve some near-far resistance but, at high
signal-to-noise ratios (SNRs), may achieve capacities far below optimal maximum likelihood detection. We then
present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power
ordering or power shaping can significantly improve the high SNR performance. Sequential OMP is analogous
to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight
into the roles of power control and multiuser detection on random-access signaling.
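The multiuser-detection-as-sparsity-detection correspondence can be illustrated with plain OMP on an i.i.d. Gaussian codebook. This is a noiseless toy sketch of the equivalence using standard OMP, not the paper's sequential-OMP variant or its power-shaping analysis.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A that
    explain y; the selected indices are the detected active users."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit and update residual
    return sorted(support)

rng = np.random.default_rng(1)
n_users, m = 64, 32                                  # n users, m degrees of freedom
A = rng.standard_normal((m, n_users)) / np.sqrt(m)   # i.i.d. Gaussian codebook
active = [3, 17, 42]                                 # users that transmit
y = A[:, active] @ np.ones(len(active))              # noiseless superposition
print(omp(A, y, 3))                                  # should print [3, 17, 42]
```

Each column of A is one user's codeword; detecting which users transmitted is exactly recovering the support of a sparse vector.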
Suppose the signal x ∈ Rn is realized by driving a k-sparse signal z ∈ Rn through an arbitrary unknown
stable discrete linear time-invariant system H, namely, x(t) = (h * z)(t), where h(·) is the impulse response of
the operator H. Is x(·) compressible in the conventional sense of compressed sensing? Namely, can x(t) be
reconstructed from a sparse set of measurements? For the case when the unknown system H is auto-regressive (i.e.,
all-pole) of a known order, it turns out that x can indeed be reconstructed from O(k log(n)) measurements. The
main idea is to pass x through a linear time-invariant system G and collect O(k log(n)) sequential measurements.
The filter G is chosen suitably, namely, so that its associated Toeplitz matrix satisfies the RIP. We develop a
novel LP optimization algorithm and show that both the unknown filter H and the sparse input z can be reliably
estimated. These types of processes arise naturally in reflection seismology.
We present steerlets, a new class of wavelets which allow us to define wavelet transforms that are covariant with
respect to rigid motions in d dimensions. The construction of steerlets is derived from an Isotropic Multiresolution
Analysis, a variant of a Multiresolution Analysis whose core subspace is closed under translations by integers
and under all rotations. Steerlets admit a wide variety of design characteristics ranging from isotropy, that is the
full insensitivity to orientations, to directional and orientational selectivity for local oscillations and singularities.
The associated 2D or 3D-steerlet transforms are fast MRA-type transforms suitable for processing of discrete
data. The subband decompositions obtained with 2D or 3D-steerlets behave covariantly under the action of the
respective rotation group on an image, so that each rotated steerlet is the linear combination of other steerlets
in the same subband.
Shearlab is a Matlab toolbox for the digital shearlet transform of two-dimensional (image) data, developed following
a rational design process. The Pseudo-Polar FFT fits very naturally with the continuum theory of the Shearlet
transform and allows us to translate Shearlet ideas directly into a digital framework. However, there are
still windows and weights which must be chosen. We developed more than a dozen performance measures
quantifying precision of the reconstruction, tightness of the frame, directional and spatial localization and other
properties. Such quantitative performance metrics allow us to: (a) tune parameters and objectively improve our
implementation; and (b) compare different directional transform implementations. We present and interpret the
most important performance measures for our current implementation.
In this paper, we present a new approach for inverse halftoning of error diffused halftones using a shearlet representation.
We formulate inverse halftoning as a deconvolution problem using Kite et al.'s linear approximation
model for error diffusion halftoning. Our method is based on a new M-channel implementation of the shearlet
transform. By formulating the problem as a linear inverse problem and taking advantage of unique properties
of an implementation of the shearlet transform, we project the halftoned image onto a shearlet representation.
We then adaptively estimate a gray-scaled image from these shearlet-toned or shear-tone basis elements in a
multi-scale and anisotropic fashion. Experiments show that the performance of our method improves upon
many of the state-of-the-art inverse halftoning routines, including a wavelet-based method and a method that
shares some similarities to a shearlet-type decomposition known as the local polynomial approximation (LPA)
technique.
The variational approach to signal restoration calls for the minimization of a cost function that is the sum of a
data fidelity term and a regularization term, the latter term constituting a 'prior'. A synthesis prior represents the
sought signal as a weighted sum of 'atoms'. On the other hand, an analysis prior models the coefficients obtained
by applying the forward transform to the signal. For orthonormal transforms, the synthesis prior and analysis
prior are equivalent; however, for overcomplete transforms the two formulations are different. We compare
analysis and synthesis ℓ1-norm regularization with overcomplete transforms for denoising and deconvolution.
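For denoising with an overcomplete transform, the two formulations the abstract contrasts can be written out explicitly. In our notation (not necessarily the authors'), let W be the synthesis matrix whose columns are the atoms and y the noisy observation:

```latex
% Synthesis prior: estimate coefficients, then synthesize the signal.
\hat{\alpha} = \arg\min_{\alpha} \tfrac{1}{2}\|y - W\alpha\|_2^2 + \lambda\|\alpha\|_1,
\qquad \hat{x} = W\hat{\alpha}.

% Analysis prior: penalize the forward-transform coefficients of the signal itself.
\hat{x} = \arg\min_{x} \tfrac{1}{2}\|y - x\|_2^2 + \lambda\|W^{\mathsf{T}}x\|_1.
```

When W is orthonormal, substituting x = Wα and using ||W^T Wα||₁ = ||α||₁ shows the two problems coincide; for overcomplete W the substitution no longer works, which is exactly the gap the paper investigates.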
One of the main challenges in high-level analysis of human behavior is the high dimension of the feature space.
To overcome the curse of dimensionality, we propose in this paper a space-curve representation of the high-dimensional
behavior features. The features of interest here are restricted to sequences of shapes of the human
body, such as those extracted from a video sequence. This evolution is a one-dimensional sub-manifold in shape
space. The central idea of the proposed representation takes root in the Whitney embedding theorem, which
guarantees an embedding of a one-dimensional manifold in R3 as a space curve. The result of such dimension
reduction is that comparing two behaviors simplifies to comparing two curves in R3. This comparison
is, in addition, theoretically and numerically easier to implement for statistical analysis. By exploiting sampling
theory, we are moreover able to achieve a computationally efficient embedding that is invertible. Specifically,
we first construct a global coordinate expression for the one-dimensional manifold sampled along a generating
curve. As experimental results, we provide substantiating modeling examples and illustrations of behavior
classification.
It is a well-known fact that the time-frequency domain is very well adapted for representing audio signals. The
two main features of time-frequency representations of many classes of audio signals are sparsity (signals are
generally well approximated using a small number of coefficients) and persistence (significant coefficients are not
isolated, and tend to form clusters). This contribution presents signal approximation algorithms that exploit
these properties, in the framework of hierarchical probabilistic models.
Given a time-frequency frame (i.e. a Gabor frame, or a union of several Gabor frames or time-frequency
bases), coefficients are first gathered into groups. A group of coefficients is then modeled as a random vector,
whose distribution is governed by a hidden state associated with the group.
Algorithms for parameter inference and hidden state estimation from analysis coefficients are described. The
role of the chosen dictionary, and more particularly its structure, is also investigated. The proposed approach
bears some resemblance to variational approaches previously proposed by the authors (in particular the variational
approach exploiting mixed-norm regularization terms).
In the framework of audio signal applications, the time-frequency frame under consideration is a union of
two MDCT bases or two Gabor frames, in order to generate estimates for tonal and transient layers. Groups
corresponding to tonal (resp. transient) coefficients are constant frequency (resp. constant time) time-frequency
coefficients of a frequency-selective (resp. time-selective) MDCT basis or Gabor frame.
While many geological and geophysical processes such as the melting of icecaps, the magnetic expression of
bodies emplaced in the Earth's crust, or the surface displacement remaining after large earthquakes are spatially
localized, many of these naturally admit spectral representations, or they may need to be extracted from data
collected globally, e.g. by satellites that circumnavigate the Earth. Wavelets are often used to study such
nonstationary processes. On the sphere, however, many of the known constructions are somewhat limited. In
particular, the notion of 'dilation' is hard to reconcile with the concept of a geological region with fixed
boundaries being responsible for generating the signals to be analyzed. Here, we build on our previous work on
localized spherical analysis using an approach that is firmly rooted in spherical harmonics. We construct, by
quadratic optimization, a set of bandlimited functions that have the majority of their energy concentrated in an
arbitrary subdomain of the unit sphere. The 'spherical Slepian basis' that results provides a convenient way for
the analysis and representation of geophysical signals, as we show by example. We highlight the connections to
sparsity by showing that many geophysical processes are sparse in the Slepian basis.
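The quadratic concentration optimization behind the Slepian basis is easiest to see in its classical one-dimensional discrete analogue: among sequences bandlimited to |f| ≤ W, find those with maximal energy on an N-sample interval. The eigenvectors of the sinc-kernel concentration matrix are the discrete prolate (Slepian) sequences; on the sphere, the same optimization is posed in spherical harmonics over an arbitrary subdomain. The sketch below is this 1-D analogue, not the spherical construction itself.

```python
import numpy as np

# Concentration kernel K[i, j] = sin(2*pi*W*(i-j)) / (pi*(i-j)) on an
# N-sample interval; its eigenvectors are the Slepian (DPSS) basis and its
# eigenvalues measure energy concentration inside the interval.
N, W = 64, 0.1
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
K = 2 * W * np.sinc(2 * W * (i - j))
vals = np.linalg.eigh(K)[0][::-1]           # eigenvalues, sorted descending
# Roughly 2NW ~ 13 eigenvalues are near 1 (well concentrated), the rest near 0.
print(vals[0], vals[40])
```

The sharp plunge of the eigenvalue spectrum is what makes the Slepian basis efficient: only about 2NW basis functions are needed to represent signals concentrated in the region.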
Recent advances in signal processing have focused on the use of sparse representations in various applications.
A new field of interest based on sparsity has recently emerged: compressed sensing. This theory
is a new sampling framework that provides an alternative to the well-known Shannon sampling theory.
In this paper we investigate how compressed sensing (CS) can provide new insights into astronomical
data compression. In a previous study1 we gave new insights into the use of Compressed Sensing (CS)
in the scope of astronomical data analysis. More specifically, we showed how CS is flexible enough to
account for particular observational strategies such as raster scans. This kind of CS data fusion concept
led to an elegant and effective way to solve the problem ESA is faced with for the transmission to Earth of
the data collected by PACS, one of the instruments onboard the Herschel spacecraft, which will be launched in
late 2008/early 2009.
In this paper, we extend this work by showing how CS can be effectively used to jointly decode multiple
observations at the level of map making. This allows us to directly estimate large areas of the sky
from one or several raster scans. Beyond the particular but important Herschel example, we strongly
believe that CS can be applied to a wider range of applications such as in earth science and remote
sensing, where dealing with multiple redundant observations is commonplace. Simple but illustrative
examples are given that show the effectiveness of CS when decoding is made from multiple redundant
observations.
We consider the problem of reconstruction of astrophysical signals probed by radio interferometers with baselines
bearing a non-negligible component in the pointing direction. The measured visibilities essentially identify with
a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. We
analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary,
in terms of the achievable quality of reconstruction through the Basis Pursuit problem. The present manuscript
represents a summary of recent work.
This paper surveys recent results on frame sequences. The first group of results characterizes the relationships
that hold among various types of dual frame sequences. The second group of results characterizes the relationships
that hold among the major Paley-Wiener perturbation theorems for frame sequences, and some of the properties
that remain invariant under such perturbations.
We propose a design procedure for real, equal-norm lapped tight frame transforms (LTFTs). These transforms
have been recently proposed as both a redundant counterpart to lapped orthogonal transforms and an
infinite-dimensional counterpart to harmonic tight frames. In addition, LTFTs can be efficiently implemented
with filter banks. The procedure consists of two steps. First, we construct new lapped orthogonal transforms
designed from submatrices of the DFT matrix. Then we specify the seeding procedure that yields real equal-norm
LTFTs. Among them, we identify the subclass of LTFTs that are maximally robust to erasures.
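The seeding idea, obtaining a tight frame from rows of the DFT matrix, can be checked numerically in a few lines. For brevity this sketch shows the complex harmonic-tight-frame case; the paper's construction additionally designs real lapped orthogonal transforms from DFT submatrices.

```python
import numpy as np

# Keeping d rows of the unitary n-point DFT matrix and reading off its n
# columns yields an equal-norm tight frame for C^d (a harmonic tight frame).
n, d = 7, 3
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
Phi = F[[0, 2, 5], :]                       # keep any d distinct rows
print(np.allclose(Phi @ Phi.conj().T, np.eye(d)),   # tight: Phi Phi* = I
      np.allclose(np.abs(Phi), 1 / np.sqrt(n)))     # all entries equal modulus
```

Tightness follows because the kept rows are orthonormal, and the columns have equal norm sqrt(d/n) because every DFT entry has the same modulus.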
Redundant systems such as frames are often used to represent a signal for error correction, denoising
and general robustness. In the digital domain quantization needs to be performed. Given
the redundancy, the distribution of quantization errors can be rather complex. In this paper we
study quantization error for a signal X in Rd represented by a frame using a lattice quantizer. We
characterize the asymptotic distribution of the quantization error as the cell size of the lattice goes
to zero. We apply these results to get the necessary and sufficient conditions for the White Noise
Hypothesis to hold asymptotically in the case of the pulse-code modulation scheme.
This is an abbreviated version of a paper that will appear elsewhere in a regular refereed journal.
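The White Noise Hypothesis is easy to probe empirically in the simplest setting: quantize frame coefficients on a scalar lattice δZ and check that, for small δ, the errors look like zero-mean noise of variance δ²/12 (the uniform-distribution variance). This is our own toy check with a circle-type frame in R², not the paper's asymptotic analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Frame coefficients of random R^2 signals under a 5-vector unit-norm frame.
theta = 2 * np.pi * np.arange(5) / 5
F = np.stack([np.cos(theta), np.sin(theta)])      # frame vectors as columns
X = rng.standard_normal((2, 20000))               # random signals
C = F.T @ X                                       # frame coefficients

delta = 0.05                                      # small lattice cell (delta * Z)
err = C - delta * np.round(C / delta)             # quantization error

# As delta -> 0 the errors behave like i.i.d. uniform on [-delta/2, delta/2]:
print(err.mean(), err.var() / (delta**2 / 12))    # ~0 and ~1
```

In the frame setting the interesting question, addressed in the paper, is when this uniform/independent behavior holds jointly across the redundant coefficients.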
One key property of frames is their resilience against erasures due to the possibility of generating stable, yet
over-complete expansions. Blind reconstruction is one common methodology to reconstruct a signal when frame
coefficients have been erased. In this paper we introduce several novel low complexity replacement schemes which
can be applied to the set of faulty frame coefficients before blind reconstruction is performed, thus serving as a
preconditioning of the received set of frame coefficients. One main idea is that frame coefficients associated with
frame vectors close to the one erased should have approximately the same value as the lost one. It is shown that
injecting such low complexity replacement schemes into blind reconstruction significantly reduces the worst-case
reconstruction error. We then apply our results to the circle frames. If we allow linear combinations of different
neighboring coefficients for the reconstruction of missing coefficients, we can even obtain perfect reconstruction
for the circle frames under certain weak conditions on the set of erasures.
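For circle frames the neighbor-based replacement can even be exact: writing c_k = ⟨x, f_k⟩ with f_k on the unit circle at equispaced angles, a sum-to-product identity gives c_{k-1} + c_{k+1} = 2 cos(2π/N) c_k, so an erased coefficient is recovered perfectly from its two neighbors. A quick numerical check (our illustration of the identity, for a single erasure):

```python
import numpy as np

N = 8                                          # circle frame with N vectors in R^2
theta = 2 * np.pi * np.arange(N) / N
F = np.stack([np.cos(theta), np.sin(theta)])   # frame vectors as columns

x = np.array([0.7, -1.2])
c = F.T @ x                                    # frame coefficients <x, f_k>

# Rebuild an erased coefficient k from its two neighbors using
# c_k = (c_{k-1} + c_{k+1}) / (2 cos(2*pi/N)), exact for circle frames.
k = 3
c_rebuilt = (c[k - 1] + c[(k + 1) % N]) / (2 * np.cos(2 * np.pi / N))
print(abs(c_rebuilt - c[k]))                   # ~0: perfect replacement
```

This is the single-erasure case; the paper's conditions on the erasure set govern when such linear neighbor combinations remain exact for multiple erasures.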
In this paper we analyze the use of frames for the transmission and error-correction of analog signals via a
memoryless erasure-channel. We measure performance in terms of the mean-square error remaining after error
correction and reconstruction. Our results continue earlier works on frames as codes which were mostly concerned
with the smallest number of erased coefficients. To extend these works we borrow some ideas from binary coding
theory and realize them with a novel class of frames, which carry a particular fusion frame architecture. We show
that a family of frames from this class achieves a mean-square reconstruction error remaining after corrections
which decays faster than any inverse power in the number of frame coefficients.
Design of Overcomplete Multidimensional Decompositions
We lay a philosophical framework for the design of overcomplete multidimensional signal decompositions based
on the union of two or more orthonormal bases. By combining orthonormal bases in this way, tight (energy
preserving) frames are automatically produced. The advantage of an overcomplete (tight) frame over a single
orthonormal decomposition is that a signal is likely to have a more sparse representation among the overcomplete
set than by using any single orthonormal basis. We discuss the question of the relationship between pairs of bases
and the various criteria that can be used to measure the goodness of a particular pair of bases. A particular case
considered is the dual-tree Hilbert-pair of wavelet bases. Several definitions of optimality are presented along
with conjectures about the subjective characteristics of the ensembles where the optimality applies. We also
consider relationships between sparseness and approximate representations.
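The automatic tightness of a union of orthonormal bases is a one-line computation: for Φ = [B1 | B2], ΦΦ^T = B1B1^T + B2B2^T = 2I, so ||Φ^T x||² = 2||x||² for every x. A quick numerical confirmation (our illustration, using the standard basis and a random orthonormal basis):

```python
import numpy as np

d = 4
rng = np.random.default_rng(2)
B1 = np.eye(d)                                      # standard basis
B2, _ = np.linalg.qr(rng.standard_normal((d, d)))   # a random orthonormal basis
Phi = np.hstack([B1, B2])                           # 2d frame vectors in R^d

# Union of two ONBs is a tight frame with frame bound 2:
print(np.allclose(Phi @ Phi.T, 2 * np.eye(d)))      # prints True
```

The design question the abstract raises is not whether the union is tight (it always is) but how to choose the pair of bases so that signals of interest are sparse over the union.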
We show how an overcomplete dictionary may be adapted to the statistics of natural images so as to provide
a sparse representation of image content. When the degree of overcompleteness is low, the basis functions
that emerge resemble those of Gabor wavelet transforms. As the degree of overcompleteness is increased, new
families of basis functions emerge, including multiscale blobs, ridge-like functions, and gratings. When the basis
functions and coefficients are allowed to be complex, they provide a description of image content in terms of local
amplitude (contrast) and phase (position) of features. These complex, overcomplete transforms may be adapted
to the statistics of natural movies by imposing both sparseness and temporal smoothness on the amplitudes.
The basis functions that emerge form Hilbert pairs such that shifting the phase of the coefficient shifts the phase
of the corresponding basis function. This type of representation is advantageous because it makes explicit the
structural and dynamic content of images, which in turn allows later stages of processing to discover higher-order
properties indicative of image content. We demonstrate this point by showing that it is possible to learn the
higher-order structure of dynamic phase - i.e., motion - from the statistics of natural image sequences.
We propose an amplitude-phase representation of the dual-tree complex wavelet transform (DT-CWT) which
provides an intuitive interpretation of the associated complex wavelet coefficients. In particular, the representation
is based on the shifting action of the group of fractional Hilbert transforms (fHT), which allows us to extend
the notion of arbitrary phase-shifts beyond pure sinusoids. We explicitly characterize this shifting action for a
particular family of Gabor-like wavelets which, in effect, links the corresponding dual-tree transform with the
framework of windowed-Fourier analysis.
We then extend these ideas to the bivariate DT-CWT based on certain directional extensions of the fHT. In
particular, we derive a signal representation involving the superposition of direction-selective wavelets affected
with appropriate phase-shifts.
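A minimal sketch of the fHT's shifting action on pure sinusoids, assuming the standard frequency-domain definition (multiply positive frequencies by e^{-iθ} and negative ones by e^{+iθ}); the function name `frac_hilbert` is ours, not the paper's:

```python
import numpy as np

def frac_hilbert(x, theta):
    """Fractional Hilbert transform: shift positive frequencies by -theta
    and negative frequencies by +theta (a pure phase shift for sinusoids)."""
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(len(x))
    phase = np.where(freqs > 0, np.exp(-1j * theta),
                     np.where(freqs < 0, np.exp(1j * theta), 1.0))
    return np.real(np.fft.ifft(X * phase))

n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 8 * t / n)
theta = np.pi / 3
y = frac_hilbert(x, theta)

# On a pure sinusoid, the fHT reduces to the expected phase shift:
assert np.allclose(y, np.cos(2 * np.pi * 8 * t / n - theta))
```

With theta = pi/2 this reduces to the ordinary Hilbert transform; the abstract's point is that the same shifting action carries over to Gabor-like wavelets, not just sinusoids.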
Many algorithms have been proposed during the last decade in order to deal with inverse problems. Of particular
interest are convex optimization approaches that consist of minimizing a criterion generally composed
of two terms: a data fidelity (linked to noise) term and a prior (regularization) term. As image properties
are often easier to extract in a transform domain, frame representations may be fruitful. Potential functions
are then chosen as priors to fit as well as possible empirical coefficient distributions. As a consequence,
the minimization problem can be considered from two viewpoints: a minimization along the coefficients
or along the image pixels directly. Some recently proposed iterative optimization algorithms can be easily
implemented when the frame representation reduces to an orthonormal basis. Furthermore, it can be noted
that in this particular case, minimizing the criterion in the transform domain and minimizing it in the image
domain are equivalent. However, care must be taken when an overcomplete representation is considered. In
that case, there is no longer equivalence between coefficient and image domain minimization. This point
will be developed throughout this paper. Moreover, we will discuss how the choice of the transform may
influence parameters and operators necessary to implement algorithms.
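A small numerical sketch of the orthonormal special case mentioned above: for an orthonormal `W`, the coefficient-domain ℓ1 problem solved by iterative soft-thresholding (ISTA) lands on the same minimizer as the closed-form transform-domain solution. The setup (random orthonormal basis, denoising data term, 50 iterations) is our own illustration, not the paper's.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
n = 16
W, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal analysis operator
y = rng.standard_normal(n)                        # noisy observation
lam = 0.3

# Coefficient-domain view: min_c 0.5*||y - W.T @ c||^2 + lam*||c||_1, via ISTA.
c = np.zeros(n)
for _ in range(50):
    c = soft(c + W @ (y - W.T @ c), lam)
x_ista = W.T @ c

# Image-domain view: for orthonormal W the minimizer has the closed form
# x = W.T @ soft(W @ y, lam); the two viewpoints coincide here.
x_closed = W.T @ soft(W @ y, lam)
assert np.allclose(x_ista, x_closed)
```

With an overcomplete `W` (more columns of W.T than rows) this equivalence breaks down, which is exactly the situation the paper analyzes.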
This paper describes an approach for decomposing a signal into the sum of an oscillatory component and a
transient component. The method uses a newly developed rational-dilation wavelet transform (WT), a self-inverting
constant-Q transform with an adjustable Q-factor (quality-factor). We propose that the oscillatory
component be modeled as a signal that can be sparsely represented using a high Q-factor WT; likewise, we
propose that the transient component be modeled as a piecewise smooth signal that can be sparsely represented
using a low Q-factor WT. Because the low and high Q-factor wavelet transforms are highly distinct (having low
coherence), morphological component analysis (MCA) successfully yields the desired decomposition of a signal
into an oscillatory and non-oscillatory component. The method, being non-linear, is not constrained by the limits
of conventional LTI filtering.
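The MCA decomposition can be sketched with simpler stand-in dictionaries: below we substitute an orthonormal DCT (for the oscillatory part) and the identity (for the transient part) in place of the paper's high- and low-Q rational-dilation wavelet transforms. The alternating-thresholding loop is the generic MCA scheme; all names and parameters are our own.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

n = 128
t = np.arange(n)
oscillatory = np.cos(2 * np.pi * 10 * t / n)   # sparse under the DCT
transient = np.zeros(n)
transient[[30, 90]] = 3.0                      # sparse under the identity
x = oscillatory + transient

C = dct_matrix(n)
lam = 0.5
x_osc = np.zeros(n)
x_tr = np.zeros(n)
for _ in range(100):
    # Alternate: re-estimate each component from the current residual.
    x_osc = C.T @ soft(C @ (x - x_tr), lam)
    x_tr = soft(x - x_osc, lam)

# The two largest entries of the transient estimate sit at the spike locations.
assert set(np.abs(x_tr).argsort()[-2:]) == {30, 90}
```

The separation works because the two dictionaries are mutually incoherent: spikes spread thinly over DCT atoms (and fall below the threshold), while the sinusoid concentrates in a few DCT bins, mirroring the low/high Q-factor distinction in the paper.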
Based on the class of complex gradient-Laplace operators, we show the design of a non-separable two-dimensional
wavelet basis from a single and analytically defined generator wavelet function. The wavelet decomposition is
implemented by an efficient FFT-based filterbank. By allowing for slight redundancy, we obtain the Marr
wavelet pyramid decomposition that features improved translation-invariance and steerability. The link with
Marr's theory of early vision is due to the replication of the essential processing steps (Gaussian smoothing,
Laplacian, orientation detection). Finally, we show how to find a compact multiscale primal sketch of the image,
and how to reconstruct an image from it.
One very influential method for texture synthesis is based on the steerable pyramid by alternately imposing
marginal statistics on the image and the pyramid's subbands. In this work, we investigate two extensions to this
framework. First, we exploit the steerability of the transform to obtain histograms of the subbands independent
of the local orientation; i.e., we select the direction of maximal response as the reference orientation. Second,
we explore the option of multidimensional histogram matching. The distribution of the responses to various
orientations is expected to capture better the local geometric structure. Experimental results show how the
proposed approach improves the performance of the original pyramid-based synthesis method.
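The step of imposing marginal statistics reduces to histogram matching; below is a minimal rank-ordering version of it (our own 1D sketch, whereas the paper applies it to images and subbands):

```python
import numpy as np

def match_histogram(source, template):
    """Impose the marginal distribution of `template` onto `source`
    by rank ordering (exact histogram matching)."""
    order = np.argsort(source)
    result = np.empty_like(source)
    result[order] = np.sort(template)
    return result

rng = np.random.default_rng(2)
src = rng.standard_normal(1000)
tmpl = rng.exponential(size=1000)
out = match_histogram(src, tmpl)

assert np.allclose(np.sort(out), np.sort(tmpl))      # marginal of the template
assert (np.argsort(out) == np.argsort(src)).all()    # rank order of the source
```

The multidimensional extension explored in the paper matches joint histograms of several subband responses at once rather than each marginal independently.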
This paper is concerned with the mathematical characterization and wavelet analysis of self-similar random
vector fields. The study consists of two main parts: the construction of random vector models on the basis of
their invariance under coordinate transformations, and a study of the consequences of conducting a wavelet
analysis of such random models. In the latter part, after briefly examining the effects of standard wavelets on
the proposed random fields, we go on to introduce a new family of Laplacian-like vector wavelets that, in a
way, replicate the covariance structure and whitening relations governing our random models.
We consider an extension of the 1-D concept of analytic wavelets to n-D, which is by construction compatible
with rotations. This extension, called the monogenic wavelet, yields a decomposition of the wavelet coefficients
into amplitude, phase, and phase direction, analogous to the decomposition of an analytic wavelet coefficient
into amplitude and phase. We demonstrate the usefulness of this decomposition with two applications.
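A sketch of the Riesz-transform computation underlying the monogenic decomposition, assuming the usual frequency response -i(w_x, w_y)/|w| (the wavelet filtering stage is omitted). For a plane wave along x, the monogenic amplitude is constant and the second Riesz component vanishes:

```python
import numpy as np

def riesz(f):
    """2D Riesz transform pair via the frequency response -i*(wx, wy)/|w|."""
    F = np.fft.fft2(f)
    wy = np.fft.fftfreq(f.shape[0])[:, None]
    wx = np.fft.fftfreq(f.shape[1])[None, :]
    norm = np.hypot(wx, wy)
    norm[0, 0] = 1.0  # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(F * (-1j * wx / norm)))
    r2 = np.real(np.fft.ifft2(F * (-1j * wy / norm)))
    return r1, r2

# A plane wave along x: amplitude should be constant, and the y-component
# of the Riesz pair should be zero.
n = 64
x = np.arange(n)
f = np.cos(2 * np.pi * 4 * x / n)[None, :] * np.ones((n, 1))
r1, r2 = riesz(f)
amplitude = np.sqrt(f**2 + r1**2 + r2**2)

assert amplitude.std() < 1e-6     # constant envelope, as for analytic signals
assert np.abs(r2).max() < 1e-6    # no oscillation along y
```

The phase direction atan2(r2, r1) then recovers the local orientation of oscillation, which is the rotation-covariant ingredient the abstract refers to.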
In this paper we explore the design of 5-band dual frame (overcomplete) wavelets with a dilation factor M = 4.
The resulting limit functions are significantly smoother than their orthogonal counterparts at the same dilation
factor. An advantage of the proposed filterbanks over dyadic (M = 2) frames is their reduced redundancy,
while maintaining smoothness. The proposed filterbanks are symmetric and generate four wavelets and a
scaling function for each of the synthesis and analysis limit functions. All wavelets have at least one vanishing
moment.
The concept of stable space splittings was introduced in the early 1990s as a convenient framework for
developing a unified theory of iterative methods for variational problems in Hilbert spaces, especially for solving
large-scale discretizations of elliptic problems in Sobolev spaces. The more recently introduced notions of frames
of subspaces and fusion frames turn out to be concrete instances of stable space splittings. However, driven
by applications to robust distributed signal processing, their study has focused so far on different aspects. The
paper surveys the existing results on stable space splittings and iterative methods, outlines the connection to
fusion frames, and discusses the investigation of quarkonial or multilevel partition-of-unity frames as an example
of current interest.
A fusion frame is a frame-like collection of subspaces in a Hilbert space. It generalizes the concept of a frame
system for signal representation. In this paper, we study the existence and construction of fusion frames. We first
introduce two general methods, namely the spatial complement and the Naimark complement, for constructing a
new fusion frame from a given fusion frame. We then establish existence conditions for fusion frames with desired
properties. In particular, we address the following question: Given M, N, m ∈ N and {λ_j}_{j=1}^M, does there exist
a fusion frame in R^M with N subspaces of dimension m for which {λ_j}_{j=1}^M are the eigenvalues of the associated
fusion frame operator? We address this problem by providing an algorithm which computes such a fusion frame
for almost any collection of parameters M, N, m ∈ N and {λ_j}_{j=1}^M. Moreover, we show how this procedure can
be applied if subspaces are to be added to a given fusion frame to force it to become tight.
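The fusion frame operator in question is a (weighted) sum of orthogonal projections onto the subspaces. The toy example below (our own, with unit weights) checks its eigenvalues for three coordinate-plane subspaces of R^3, which happen to form a tight fusion frame:

```python
import numpy as np

def projection(basis_cols):
    """Orthogonal projection onto the span of the given columns."""
    Q, _ = np.linalg.qr(basis_cols)
    return Q @ Q.T

# Three 2-dimensional coordinate subspaces of R^3:
# span{e1,e2}, span{e2,e3}, span{e1,e3}.
e = np.eye(3)
subspaces = [e[:, [0, 1]], e[:, [1, 2]], e[:, [0, 2]]]

# Fusion frame operator S = sum of the projections (unit weights).
S = sum(projection(W) for W in subspaces)

# Each coordinate direction is covered by exactly two subspaces, so S = 2*I:
# the fusion frame is tight, with all eigenvalues equal to N*m/M = 3*2/3 = 2.
print(np.linalg.eigvalsh(S))  # → [2. 2. 2.]
```

In the paper's notation this is the case M = 3, N = 3, m = 2 with the prescribed eigenvalue list (2, 2, 2).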
Fusion frames are an emerging topic of frame theory, with applications to communications and distributed
processing. However, until recently, little was known about the existence of tight fusion frames, much less how
to construct them. We discuss a new method for constructing tight fusion frames which is akin to playing Tetris
with the spectrum of the frame operator. When combined with some easily obtained necessary conditions, these
Spectral Tetris constructions provide a near complete characterization of the existence of tight fusion frames.
Compressed Sensing (CS) is a new signal acquisition technique that allows sampling of sparse signals using
significantly fewer measurements than previously thought possible. On the other hand, a fusion frame is a new
signal representation method that uses collections of subspaces instead of vectors to represent signals. This work
combines these exciting new fields to introduce a new sparsity model for fusion frames. Signals that are sparse
under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals
using standard CS. The combination provides a promising new set of mathematical tools and signal models useful
in a variety of applications.
With the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it
need not be sparse within each of the subspaces it occupies. We define a mixed ℓ1/ℓ2 norm for fusion frames.
A signal sparse in the subspaces of the fusion frame can thus be sampled using very few random projections
and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The sampling
conditions we derive are very similar to the coherence and RIP conditions used in standard CS theory.
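A sketch of the mixed ℓ1/ℓ2 norm and its proximal operator (block soft-thresholding), which is the shrinkage step a solver for the stated convex program would use; the block layout and names here are our own illustration, with each block standing in for the coefficients of one subspace:

```python
import numpy as np

def mixed_l1_l2(c, blocks):
    """Mixed l1/l2 norm: sum over subspaces of each block's l2 norm."""
    return sum(np.linalg.norm(c[b]) for b in blocks)

def block_soft(c, blocks, t):
    """Proximal operator of the mixed norm: shrink each block's l2 norm by t."""
    out = np.zeros_like(c)
    for b in blocks:
        nrm = np.linalg.norm(c[b])
        if nrm > t:
            out[b] = (1 - t / nrm) * c[b]
    return out

# A signal occupying 1 of 4 blocks: sparse across subspaces,
# but dense within the block it occupies.
blocks = [slice(0, 4), slice(4, 8), slice(8, 12), slice(12, 16)]
c = np.zeros(16)
c[4:8] = [3.0, -1.0, 2.0, 0.5]

assert np.isclose(mixed_l1_l2(c, blocks), np.linalg.norm(c[4:8]))
shrunk = block_soft(c, blocks, 1.0)
# Shrinkage preserves the block-sparsity pattern:
assert set(np.nonzero(shrunk)[0]) == {4, 5, 6, 7}
```

Minimizing this norm under the measurement constraints favors solutions active in few subspaces, mirroring how the plain ℓ1 norm favors few nonzero entries in standard CS.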
In this work, a modification of the kq-flats framework for pattern classification introduced in [9] is used for
pixelwise object detection. We include a preliminary discussion of augmenting this method with a Chan-Vese-like
geometric regularization.
We derive lower and upper bounds for the distance between a frame and the set of equal-norm Parseval frames.
The lower bound results from variational inequalities. The upper bound is obtained with a technique that uses
a family of ordinary differential equations for Parseval frames which can be shown to converge to an equal-norm
Parseval frame, if the number of vectors in a frame and the dimension of the Hilbert space they span are relatively
prime, and if the initial frame consists of vectors having sufficiently nearly equal norms.
Image-based medical diagnosis typically relies on the (poorly reproducible) subjective classification of textures
in order to differentiate between diseased and healthy pathology. Clinicians claim that significant benefits would
arise from quantitative measures to inform clinical decision making. The first step in generating such measures
is to extract local image descriptors - from noise corrupted and often spatially and temporally coarse resolution
medical signals - that are invariant to illumination, translation, scale and rotation of the features. The Dual-Tree
Complex Wavelet Transform (DT-CWT) provides a wavelet multiresolution analysis (WMRA) tool with good
properties, e.g. in 2D, but has limited rotational selectivity. It also requires computationally intensive steering
due to the inherently 1D operations performed. The monogenic signal, which is defined in n ≥ 2 dimensions
via the Riesz transform, gives excellent orientation information without the need for steering. Recent work has
suggested the Monogenic Riesz-Laplace wavelet transform as a possible tool for integrating these two concepts
into a coherent mathematical framework. We have found that the proposed construction suffers from a lack of
rotational invariance and is not optimal for retrieving local image descriptors. In this paper we show:
1. Local frequency and local phase from the monogenic signal are not equivalent, especially in the phase
congruency model of a "feature", and so they are not interchangeable for medical image applications.
2. The accuracy of local phase computation may be improved by estimating the denoising parameters while
maximizing a new measure of "featureness".
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection
easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing
Transform on Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform
which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant
variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus,
MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...),
and then applying a VST to the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present
article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests
are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method
based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
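The variance-stabilization idea can be illustrated with the classical Anscombe transform, a simple VST for raw Poisson counts (the paper's MS-VST applies stabilization to multi-scale coefficients instead, but the goal is the same: near-unit variance regardless of the intensity):

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe VST: for Poisson(lam) counts, 2*sqrt(x + 3/8)
    has approximately unit variance and is close to Gaussian for large lam."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(3)
for lam in (5, 20, 100):
    v = anscombe(rng.poisson(lam, size=200_000)).var()
    print(lam, round(v, 3))  # the stabilized variance approaches 1
```

Once stabilized, Gaussian-based hypothesis tests can be applied to the coefficients, as described above.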
The Multi-scale Variance Stabilization Transform (MSVST) has recently been proposed for 2D Poisson data
denoising [1]. In this work, we present an extension of the MSVST with the wavelet transform to multivariate
data, where each pixel is vector-valued and the vector dimension may be wavelength, energy, or
time. Such data can be viewed naively as 3D data where the third dimension may be time, wavelength or
energy (e.g. hyperspectral imaging). But this naive analysis using a 3D MSVST would be awkward as the data
dimensions have different physical meanings. A more appropriate approach would be to use a wavelet transform,
where the time or energy scale is not connected to the spatial scale. We show that our multivalued extension of
MSVST can be used advantageously for approximately Gaussianizing and stabilizing the variance of a sequence
of independent Poisson random vectors. This approach is shown to be fast and very well adapted to extremely
low-count situations. We use a hypothesis testing framework in the wavelet domain to denoise the Gaussianized
and stabilized coefficients, and then apply an iterative reconstruction algorithm to recover the estimated vector
field of intensities underlying the Poisson data. Our approach is illustrated for the detection and characterization
of astrophysical sources of high-energy gamma rays, using realistic simulated observations. We show that the
multivariate MSVST permits efficient estimation across the time/energy dimension and immediate recovery of
spectral properties.
During data acquisition, the loss of data is common. It can be due to malfunctioning sensors of a CCD camera or
any other acquisition system, or because we can only observe a part of the system we want to analyze. This problem
has been addressed using diffusion through partial differential equations in 2D and 3D, and more recently
using sparse representations in 2D, in a process called inpainting. Inpainting uses sparsity to compute a solution,
in the masked/unknown part, that is statistically similar to the known data in the sense of the transforms used,
so that one cannot tell the inpainted part from the real one. It can be applied to any kind of 3D data, whether
3D spatial data, 2D plus time (video), or 2D plus wavelength (multi-spectral imaging). We present inpainting
results on 3D data using sparse representations, including the wavelet transform, the discrete cosine transform,
and the 3D curvelet transform.
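A minimal 1D sketch of sparsity-based inpainting, substituting an orthonormal DCT for the multi-scale dictionaries used in the paper: alternate between re-imposing the observed samples and soft-thresholding in the transform domain, with a threshold decreasing toward zero. The signal, mask rate, and schedule are our own illustrative choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(4)
n = 128
C = dct_matrix(n)
coeffs = np.zeros(n)
coeffs[[3, 12, 40]] = [5.0, -4.0, 3.0]
x = C.T @ coeffs                     # signal that is sparse in the DCT
mask = rng.random(n) > 0.4           # ~60% of the samples are observed

# Alternate between enforcing the known samples and sparsifying in the
# dictionary, with a decreasing threshold.
est = np.where(mask, x, 0.0)
for thr in np.linspace(2.0, 0.01, 100):
    est = np.where(mask, x, est)
    est = C.T @ soft(C @ est, thr)
est = np.where(mask, x, est)

rel_err = np.linalg.norm(est - x) / np.linalg.norm(x)
print(rel_err)  # small: the masked samples are filled in
```

The 3D case in the paper works the same way, with the mask and the transforms (wavelets, DCT, 3D curvelets) acting on volumes instead of vectors.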
The recent development of multi-channel sensors has motivated interest in devising new methods for the
coherent processing of multivariate data. Extensive work has already been dedicated to multivariate
data processing, ranging from blind source separation (BSS) to multi/hyper-spectral data restoration.
Previous work has emphasized the fundamental role played by sparsity and morphological diversity
in enhancing multichannel signal processing.
GMCA is a recent algorithm for multichannel data analysis which was used successfully in a variety of
applications including multichannel sparse decomposition, blind source separation (BSS), color image
restoration and inpainting. Inspired by GMCA, a recently introduced algorithm coined HypGMCA
is described for BSS applications in hyperspectral data processing. It assumes the collected data is a
linear instantaneous mixture of components exhibiting sparse spectral signatures as well as sparse spatial
morphologies, each in specified dictionaries of spectral and spatial waveforms. We report on numerical
experiments with synthetic data and application to real observations which demonstrate the validity of
the proposed method.
Patch-based methods give some of the best denoising results, yet their theoretical performance is still mathematically
unexplained. We propose a novel insight into NL-Means based on an aggregation point of view. More precisely,
we describe the framework of PAC-Bayesian aggregation, show how it allows us to derive new patch-based
methods and to characterize their theoretical performance, and present some numerical experiments.
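For reference, a toy 1D NL-Means in the exponentially-weighted-aggregation form discussed above; the patch size, bandwidth `h`, and the step test signal are arbitrary choices of ours:

```python
import numpy as np

def nl_means_1d(y, patch=3, h=0.5):
    """Toy 1D NL-Means: each sample is a weighted average of all samples,
    weighted by patch similarity (an exponential aggregation of estimators)."""
    n = len(y)
    pad = np.pad(y, patch, mode='reflect')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)      # Gibbs-type weights over candidate samples
        out[i] = (w @ y) / w.sum()
    return out

rng = np.random.default_rng(5)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # step signal
noisy = clean + 0.2 * rng.standard_normal(n)
denoised = nl_means_1d(noisy)

# Patch weights keep the averaging on the correct side of the step,
# so the MSE drops relative to the noisy input.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The exponential weights are exactly the Gibbs-posterior form that the PAC-Bayesian aggregation framework analyzes.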
Distributed compressed sensing is the extension of compressed sampling (CS) to sensor networks. The idea is to
design a CS joint decoding scheme at a central decoder (base station) that exploits the inter-sensor correlations, in
order to recover the whole set of observations from a small number of random measurements per node. In this paper,
we focus on modeling the correlations and on the design and analysis of efficient joint recovery algorithms.
We show, by extending earlier results of Baron et al. [1], that a simple thresholding algorithm can exploit the
full diversity offered by all channels to identify a common sparse support using a near optimal number of
measurements.
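A sketch of the kind of joint thresholding decoder described (our simplified variant, not the authors' exact algorithm): each node's back-projection proxy is squared and summed at the decoder, and the common support is read off the largest aggregated entries.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, J, k = 128, 32, 16, 3   # ambient dim, measurements per node, nodes, sparsity
support = rng.choice(n, size=k, replace=False)

# Every sensor observes a different sparse signal on the SAME support,
# through its own random measurement matrix; the decoder aggregates
# squared back-projection proxies across all channels.
stat = np.zeros(n)
for _ in range(J):
    x = np.zeros(n)
    x[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    proxy = Phi.T @ (Phi @ x)         # this node's back-projection proxy
    stat += proxy ** 2                # evidence accumulated at the decoder

est_support = np.argsort(stat)[-k:]   # threshold: keep the k largest entries
assert sorted(est_support) == sorted(support)
```

Because the off-support proxy entries are incoherent noise that averages out across channels while the on-support entries add up, the aggregated statistic identifies the common support from far fewer measurements per node than single-channel recovery would need.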