We determined the signal-to-noise ratios for an image fusion approach suited to systems with disparate sensors. We found that reconstruction error remains even when the forward and reverse transforms used for image fusion each give perfect reconstruction on their own.
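The finding above can be illustrated with a minimal numerical sketch. The transform (a single-level 1-D orthonormal Haar matrix), the signal, and the mismatched forward transform (the identity) are all illustrative assumptions, not the transforms used in the chapter; the point is only that a perfectly reconstructing pair fails once the forward transform is swapped while the inverse stays fixed:

```python
import numpy as np

# Illustrative sketch: single-level 1-D orthonormal Haar transform.
# On its own, the pair (H, H.T) reconstructs perfectly; pairing a
# different forward transform with the same inverse introduces error.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
x = np.array([3.0, 1.0])

# Matched pair: Haar forward, Haar inverse -> perfect reconstruction.
x_rec = H.T @ (H @ x)          # H is orthonormal, so H.T inverts it
print(np.allclose(x_rec, x))   # True

# Mismatched pair: identity forward, Haar inverse -> residual error.
x_bad = H.T @ (np.eye(2) @ x)
print(np.allclose(x_bad, x))   # False
```

The same mismatch arises whenever images pass through different forward transforms but share one common inverse, which is the regime the chapters below study.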
We used entropy-based measures to evaluate a method designed to fuse imagery from different sensor types. The method applies different forward transforms to the input images and a common transform to reconstruct the final result. We examined the link between the error in a reconstructed result and its associated entropy.
We used a method designed to fuse imagery from different sensor types. The method applies different forward transforms to the input images and a common transform to reconstruct the final result. When measuring the average relative entropy of the fused results, we found that our method generally gave better results than a more conventional approach. Reconstruction filters with a small number of zeros appeared to give the best performance.
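A rough sketch of how such entropy measures can be computed follows. The images are synthetic stand-ins, and the histogram-based estimators of Shannon entropy and relative entropy (KL divergence) are one common choice, not necessarily the exact measures used in the chapters:

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (bits) of an 8-bit image's intensity histogram
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def relative_entropy(img_a, img_b, bins=256, eps=1e-12):
    # KL divergence between the two images' intensity distributions;
    # eps-smoothing avoids division by zero in empty bins
    pa, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    pb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    pa, pb = pa + eps, pb + eps
    pa, pb = pa / pa.sum(), pb / pb.sum()
    return np.sum(pa * np.log2(pa / pb))

# Synthetic stand-ins for a source image and a fused result
rng = np.random.default_rng(0)
src = rng.integers(0, 256, (64, 64))
fused = np.clip(src + rng.normal(0, 5, src.shape), 0, 255)
print(entropy(src), relative_entropy(fused, src))
```

A lower relative entropy between the fused result and a source indicates that the fusion preserved more of that source's intensity distribution.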
We developed a method to fuse imagery from different sensor types. The core of our method applies different forward transforms to the input images and a common transform to reconstruct the final result. When measuring the entropy and power of the fused result, we found that our method generally gave better results than a more conventional approach. Our method could form the basis of a new image fusion approach because it offers results not possible with a conventional one.
We combined cross-sensor data in a way that improves the extraction of information from disparate sensors. We presented a new method for signal fusion that uses different forward transforms for two images and a common transform for the inverse. When using a fusion rule that selects the maximum value between images, our method transferred more energy to the result. It could form the basis of a new image fusion approach because it transfers more energy to the result than is possible with a conventional approach.
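The maximum-selection fusion rule can be sketched on transform coefficients as below. The coefficient arrays are random stand-ins; the energy comparison against a simple averaging rule holds in general, since max(a², b²) ≥ ((a+b)/2)² for any a, b:

```python
import numpy as np

def fuse_max(c1, c2):
    # Max-magnitude selection: at each position, keep the coefficient
    # with the larger absolute value
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

# Synthetic stand-ins for transform coefficients of two input images
rng = np.random.default_rng(1)
c1 = rng.normal(0, 1, (8, 8))
c2 = rng.normal(0, 2, (8, 8))

fused_max = fuse_max(c1, c2)
fused_avg = 0.5 * (c1 + c2)   # conventional averaging rule

energy = lambda c: np.sum(c ** 2)
print(energy(fused_max) > energy(fused_avg))  # True
```

Max-selection keeps the strongest response from either sensor at each coefficient, which is why it transfers more energy to the result than averaging does.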
We determined the overall disparity from stereo images using multiscale products of disparity values. Using the wavelet transform, we performed a multiresolution analysis to calculate disparity values from stereo images at different scales. Forming the product of disparities and rescaling produced a disparity map directly related to depth. The multiresolution approach allowed us to determine disparity more accurately by examining the consistency of results across scales. This approach could form the basis of an effective method for depth estimation.
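The multiscale-product step can be sketched as follows. The per-scale disparity maps here are synthetic stand-ins for wavelet-based estimates, and the normalisation is an illustrative choice; the idea is that only disparities consistent across every scale remain strong in the product:

```python
import numpy as np

def multiscale_product(disparity_maps):
    # Normalise each scale's map, then multiply element-wise so that
    # values inconsistent at any scale are suppressed in the product
    prod = np.ones_like(disparity_maps[0], dtype=float)
    for d in disparity_maps:
        prod *= d / (d.max() + 1e-12)
    return prod

# Synthetic stand-ins: three noisy per-scale estimates of one disparity
rng = np.random.default_rng(2)
true_d = np.full((16, 16), 0.8)
scales = [true_d + rng.normal(0, 0.05, true_d.shape) for _ in range(3)]
combined = multiscale_product(scales)
print(combined.shape)  # (16, 16)
```

After rescaling, the combined map can then be converted to depth via the usual inverse relationship between disparity and distance.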