The sparsifying representation plays a significant role in compressive sensing (CS)-based hyperspectral (HS) imaging. Training a dictionary for each dimension from HS samples greatly benefits accurate reconstruction. However, tensor dictionary learning algorithms suffer from heavy computation and convergence difficulties. We propose a least squares (LS)-type multidimensional dictionary learning algorithm for CS-based HS imaging. We develop a practical method for the dictionary updating stage that avoids the Kronecker product and thus has lower computational complexity. To guarantee convergence, we add a pruning stage that preserves the similarity and correlation among data in the spectral dimension. Our experimental results demonstrate that dictionaries trained using the proposed algorithm perform better at CS-based HS image reconstruction than those trained with traditional LS-type dictionary learning algorithms and the commonly used analytical dictionaries.
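The Kronecker-free dictionary update described above can be illustrated with a short sketch. This is not the paper's algorithm, only a generic LS update of one mode dictionary via mode-n unfoldings; the tensor shapes and helper names are illustrative assumptions.

```python
import numpy as np

def mode_unfold(T, n):
    """Mode-n unfolding: move axis n to the front, flatten the rest."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_product(T, M, n):
    """Mode-n product T x_n M, where M has shape (J, T.shape[n])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def ls_dictionary_update(X, dicts, C, n):
    """LS update of the n-th dictionary without forming a Kronecker product.

    Solves min_Dn || X_(n) - Dn B_(n) ||_F, where B = C x_k Dk for all k != n,
    so only small mode products are formed instead of a huge Kronecker matrix.
    """
    B = C
    for k, Dk in enumerate(dicts):
        if k != n:
            B = mode_product(B, Dk, k)
    Bn = mode_unfold(B, n)
    Xn = mode_unfold(X, n)
    # Dn = Xn Bn^T (Bn Bn^T)^{-1}, computed via lstsq for numerical stability.
    return np.linalg.lstsq(Bn.T, Xn.T, rcond=None)[0].T
```

When the data tensor is exactly representable by the current coefficients, this update recovers the generating dictionary for the chosen mode.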
KEYWORDS: Denoising, Hyperspectral imaging, Performance modeling, Data modeling, Image filtering, Computer simulations, Signal to noise ratio, Image denoising
Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue carries a specific physical meaning and should be regularized differently. Moreover, the NNM-based methods only exploit the high spectral correlation while ignoring the local structure of HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of HSI, the TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
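The core idea of weighting each singular value differently can be sketched as a weighted singular value thresholding step. This is a generic sketch of the WNNM proximal operator, not the paper's full TWNNM algorithm; the weight values below are illustrative assumptions.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding (sketch of the WNNM proximal step):
    each singular value is shrunk by its own weight, so large (signal-dominant)
    values can be shrunk less than small (noise-dominant) ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

In a reweighting scheme, the weights are typically chosen inversely proportional to the singular values, e.g. `w_i = c / (s_i + eps)`, so that dominant spectral components are preserved.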
We propose a new approach for Kronecker compressive sensing of hyperspectral (HS) images, including the imaging mechanism and the corresponding reconstruction method. The proposed mechanism is able to compress the data of all dimensions when sampling, which can be achieved by three fully independent sampling devices. As a result, the mechanism greatly reduces the number of control points and the memory requirement. In addition, we can select suitable sparsifying bases and generate the corresponding optimized sensing matrices, or change the distribution of the sampling ratio for each dimension independently according to different HS images. In cooperation with the mechanism, we combine the sparsity model and the low multilinear-rank model to develop a reconstruction method. Analysis shows that our reconstruction method has a lower computational complexity than traditional methods based on the sparsity model. Simulations verify that HS images can be reconstructed successfully with very few measurements. In summary, the proposed approach reduces the complexity and improves the practicability of HS image compressive sensing.
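Sampling each dimension with an independent device corresponds mathematically to a Kronecker-structured sensing matrix that never needs to be formed explicitly. The following sketch (an assumption-level illustration, not the paper's hardware model) shows the equivalence via three small mode products:

```python
import numpy as np

def separable_sample(X, A1, A2, A3):
    """Sample an HS cube X with three independent matrices, one per dimension:
    Y = X x_1 A1 x_2 A2 x_3 A3, which equals (A3 kron A2 kron A1) vec(X)
    under column-major vectorization, without forming the huge Kronecker matrix."""
    Y = np.tensordot(A1, X, axes=(1, 0))                      # spatial mode 1
    Y = np.moveaxis(np.tensordot(A2, Y, axes=(1, 1)), 0, 1)   # spatial mode 2
    Y = np.moveaxis(np.tensordot(A3, Y, axes=(1, 2)), 0, 2)   # spectral mode
    return Y
```

Because each factor matrix is small, the memory and computation scale with the mode sizes rather than with the full Kronecker product.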
Single-image super-resolution (SR) is one of the most important and challenging issues in image processing. To produce a high-resolution image from a low-resolution image, one of the conventional approaches is to leverage regularization to overcome the limitations caused by the modeling. However, conventional regularizers such as total variation neglect the high-level structures in the data. To overcome this drawback, we propose to exploit the underlying information in images with structured edges by using directional total variation. An alternating direction method of multipliers (ADMM)-based algorithm is presented to effectively solve the resulting optimization problem. Computer simulations on several texture images, such as a leaf image, demonstrate the effectiveness and improvement of the proposed method for SR reconstruction, both qualitatively and quantitatively. Furthermore, the effect of parameter selection on the proposed method is also discussed.
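A directional total variation penalty can be sketched by rotating the image gradient toward a dominant edge direction and weighting the two components differently. This is a generic illustration of the idea, not the paper's exact formulation; the angle `theta` and stretch factor `a` are assumed parameters.

```python
import numpy as np

def directional_tv(u, theta, a=2.0):
    """Directional TV (sketch): rotate the gradient by theta and stretch the
    component along the chosen direction by a factor `a`, then take the
    anisotropic l1 norm. With a > 1, variation along that direction is
    penalized more heavily, favoring edges orthogonal to it."""
    gx = np.diff(u, axis=1, append=u[:, -1:])  # forward differences, x
    gy = np.diff(u, axis=0, append=u[-1:, :])  # forward differences, y
    c, s = np.cos(theta), np.sin(theta)
    d1 = a * (c * gx + s * gy)   # stretched component along the direction
    d2 = -s * gx + c * gy        # orthogonal component
    return np.abs(d1).sum() + np.abs(d2).sum()
```

For an image with purely horizontal stripes, the penalty doubles when the stretched direction is rotated onto the vertical gradient, which is exactly the directional selectivity the regularizer provides.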
For locally smooth regions in multifocus images, it is difficult to judge whether they are in focus, whether using human eyes or specialized focus measures. We propose to classify the images into smooth and nonsmooth regions based on the structural similarity index. The quaternion wavelet transform (QWT), as a novel tool of image analysis, has some superior properties compared to the discrete wavelet transform, such as nearly shift-invariant wavelet coefficients and phase-based texture representation. We use the local variance of the QWT phases to detect the focus position for pixels belonging to the nonsmooth image regions. Thus, binary images of the left-focus, right-focus, and smooth regions (i.e., assuming two different focus positions) are obtained. Then, the connected-components labeling algorithm is exploited to label the two binary images containing the focus position information, and regions with focus measure errors are transferred between the two binary images. The fusion result is finally acquired from three weighted binary images combined with the original multifocus images. Furthermore, we conduct several experiments to verify the feasibility of the proposed fusion method. Its performance is demonstrated to be superior to that of current methods.
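The local-variance focus measure at the heart of the method can be sketched in plain NumPy. The sketch assumes some phase map is already available (computing an actual QWT is outside its scope); window size and inputs are illustrative.

```python
import numpy as np

def local_variance(phase, k=3):
    """Local variance of a (QWT phase) map over k x k neighborhoods,
    used as a per-pixel focus measure; computed with sliding windows."""
    p = np.pad(phase, k // 2, mode='edge').astype(float)
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-2, -1))

def focus_decision(phase_left, phase_right, k=3):
    """Per-pixel binary decision: True where the left-focused image shows
    higher local phase variance, i.e., is judged sharper."""
    return local_variance(phase_left, k) > local_variance(phase_right, k)
```

The resulting boolean map plays the role of the binary focus images that are subsequently cleaned up with connected-components labeling.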
Speckle reduction is a difficult task in ultrasound image processing because of low resolution and contrast. As a novel tool of image analysis, the quaternion wavelet transform (QWT) has some superior properties compared to discrete wavelets, such as nearly shift-invariant wavelet coefficients and phase-based texture representation. We aim to exploit this for speckle reduction in the quaternion wavelet domain based on the soft thresholding method. First, we examine the characteristics of the QWT magnitude and phases for the denoising application, and find that the QWT phases of the images are little influenced by noise. Then we model the QWT magnitude using the Rayleigh distribution and derive the thresholding criterion. Furthermore, we conduct several experiments on synthetic speckle images and real ultrasound images. The performance of the proposed speckle reduction algorithm, using QWT with soft thresholding, is demonstrated to be superior to that of algorithms using the discrete wavelet transform and classical methods.
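The two ingredients, a Rayleigh model for the magnitudes and soft shrinkage, can be sketched as follows. The threshold rule shown is the generic universal threshold, not the criterion derived in the paper, and the scale estimate is the standard Rayleigh maximum-likelihood formula.

```python
import numpy as np

def rayleigh_sigma(mag):
    """ML estimate of the Rayleigh scale parameter from magnitude samples:
    sigma_hat = sqrt(mean(m^2) / 2)."""
    mag = np.asarray(mag, dtype=float)
    return np.sqrt(np.mean(mag ** 2) / 2.0)

def soft_threshold(x, t):
    """Standard soft shrinkage applied to (QWT magnitude) coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

A complete denoiser would estimate `sigma` per subband, form a threshold such as `sigma * sqrt(2 * log(N))` (an assumption here), shrink the magnitudes, and invert the transform while keeping the phases untouched.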
An accuracy assessment method for the laboratory radiometric calibration of an infrared camera was studied in order to validate the measured data of a space infrared camera. First, the imaging process of the infrared camera during laboratory radiometric calibration was analyzed and modeled, and a model of the impact chain synthesized into the linear radiometric calibration coefficients was built. Second, based on this model, a model of the uncertainty of the linear radiometric calibration coefficients was built. Finally, an experiment verified the validity of the model. The experimental results indicate that, for the gain uncertainty assessment, the difference between the assessment value and the maximum experimental value is 0.08% and the difference between the assessment value and the average of the experimental values is 0.79%; for the offset uncertainty assessment, the corresponding differences are 0.7% and 0.18%. The uncertainty of the radiometric calibration coefficients obtained from the experiments is thus basically consistent with the assessment. The accuracy assessment method, which combines the uncertainties of the chain factors, avoids omissions and repetitions in the uncertainty estimation, and the optimization of cameras used for quantification can be designed on the basis of this method.
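When the chain factors are independent, their relative uncertainties combine in root-sum-square fashion. The sketch below is a generic GUM-style combination, not the paper's full gain/offset model; the example factor values are purely illustrative.

```python
import numpy as np

def combined_uncertainty(relative_uncertainties):
    """Root-sum-square combination of independent relative uncertainties in a
    calibration chain (a generic sketch; correlated factors would need
    additional covariance terms)."""
    u = np.asarray(relative_uncertainties, dtype=float)
    return np.sqrt(np.sum(u ** 2))
```

Enumerating every factor of the chain once, as the model does, is what prevents both omission (a forgotten factor) and repetition (a factor counted twice) in the combined estimate.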
A new computing method is proposed for the measurement of the thermophysical parameters of specimens with nonuniform temperature profiles. The calculation method is derived from the temperature dependence of the thermal properties and can be applied to the measurement of the longitudinal thermal expansion coefficient and the electrical resistivity. In this method, the cross section of the entire specimen is assumed to be uniform at room temperature, and its changes during the experiments are ignored. If the temperatures are measured at equal intervals, the specimen may be considered as consisting of M equal segments, each S long. The corresponding length and resistance of these segments at temperature T0 may be of any value. If the changes in length and the temperature distribution of the specimen are measured, the temperature dependence of the thermal expansion coefficient can be worked out using this method. If the resistance and the temperature distribution of the specimen are measured, the electrical resistivity of the specimen as a function of temperature, corrected for thermal expansion, can be obtained as well. The validity of the computing method for the thermal expansion coefficient and the electrical resistivity is verified through computer simulation. The maximum 'measurement' error of the electrical resistivity is 3.3%.
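The segment-summation idea can be sketched numerically: each of the M segments contributes according to its own local temperature. This is a simplified forward model under the stated assumptions (uniform cross section, known property functions), not the paper's inverse computation; the property functions and values below are illustrative.

```python
import numpy as np

def total_elongation(temps, alpha, S, T0):
    """Elongation of a specimen of M equal segments (each S long at T0) with a
    nonuniform temperature profile, given an expansion coefficient alpha(T)."""
    temps = np.asarray(temps, dtype=float)
    return np.sum(S * alpha(temps) * (temps - T0))

def total_resistance(temps, rho, S, area, T0, alpha=None):
    """Series resistance of the segments, optionally correcting each segment's
    length for thermal expansion, as the method does for resistivity."""
    temps = np.asarray(temps, dtype=float)
    if alpha is not None:
        length = S * (1.0 + alpha(temps) * (temps - T0))
    else:
        length = np.full_like(temps, S)
    return np.sum(rho(temps) * length / area)
```

Inverting these relations, i.e., solving for `alpha(T)` or `rho(T)` from measured totals and the temperature profile, is the substance of the proposed computing method.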
This work aims at improving the measurement accuracy of thermal conductivity and thermal diffusivity using a hot disk thermal constants analyser. The hot disk technique is based on the transient heating of a double-spiral plane sensor sandwiched between two pieces of the investigated material. By recording the temperature change at the sensor surface, it is possible to deduce both the thermal conductivity and the thermal diffusivity of the surrounding material from a single transient recording, provided the heating power and measuring time are appropriately chosen within the reasonable range defined by the theory and the experimental situation. Based on engineering application requirements for precision and efficiency, a new experimental method has been developed for the high-accuracy measurement of thermal conductivity and thermal diffusivity under different experimental conditions. The standardized material Pyroceram 9606, with a thermal conductivity of 4.05 W/(m·K), was investigated and analyzed using the newly developed method. The measurement results show a precision of 5% for thermal conductivity and 4% for thermal diffusivity at or around room temperature under normal pressure, which indicates that the newly developed method achieves high-accuracy measurement of thermal conductivity and diffusivity.