This PDF file contains the front matter associated with SPIE Proceedings Volume 9869, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* (*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.) To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Content-based image retrieval systems have many applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task, the main issue being the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework presented in the paper for data-dependent image hashing is based on the use of two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison to other state-of-the-art methods.
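At query time, a framework of this kind reduces retrieval to nearest-neighbour search in Hamming space over the learned binary codes. A minimal sketch of that final stage (the 8-bit codes and image names below are hypothetical; the CNN and autoencoder that produce them are not shown):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary hash codes."""
    return bin(a ^ b).count("1")

def retrieve(query: int, database: dict, k: int = 3):
    """Keys of the k database entries whose codes are closest to the query."""
    return sorted(database, key=lambda name: hamming(query, database[name]))[:k]

# Hypothetical 8-bit codes produced by the hashing network
db = {"cat1": 0b10110010, "cat2": 0b10110011, "car1": 0b01001101}
print(retrieve(0b10110110, db, k=2))  # → ['cat1', 'cat2']
```

Hamming distance on compact codes is what makes the short-response-time requirement attainable: each database comparison is a single XOR and popcount.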
Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, sadness, etc. and evaluates the usefulness of the 3D data points of a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral emotion represent a whole expression. The feature vector eliminates the need for complex face-orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
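The classifier described above, kNN with a DTW-based similarity over per-frame feature components, can be sketched with scalar sequences (real components are vectors of mesh distances; the sequences and labels below are illustrative):

```python
import math

def dtw(seq_a, seq_b):
    """Dynamic-time-warping distance between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def knn_label(query, training, k=1):
    """Majority label among the k training sequences nearest under DTW."""
    ranked = sorted(training, key=lambda ex: dtw(query, ex[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

train = [([0, 2, 5, 2, 0], "smile"), ([0, 1, 1, 0], "neutral")]
print(knn_label([0, 2, 4, 2, 0], train))  # closest to the "smile" template
```

DTW tolerates the varying expression speed across frame sequences that a fixed-length Euclidean comparison would penalize.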
Color transfer changes the color contents of a target image by replacing the colors in the target image with colors from another source image. The target image is repainted/recolored to exhibit the same ambience as the source image. Color transfer is applicable to a wide range of commercial image processing tools and products. While much outstanding research has been conducted on this subject, judging the performance of the recoloring process has remained subjective, relying on human evaluation. To obtain an objective quantitative assessment of recoloring algorithms' performance, a new color transfer quality measure is proposed. In this paper, we first establish the requirements that a good color transfer quality measure should meet. Then, according to these requirements, a new color transfer quality measure is proposed that focuses on measuring the content similarity between the target image and the resulting recolored output image, and the color similarity between the source image and the output image. To assess the proposed measure, subjective human-perception Mean Opinion Score (MOS) values are used. The high correlation between MOS and the proposed measure demonstrates the measure's performance and its high consistency with human perception.
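A measure of the shape described, one term for content similarity with the target and one for color similarity with the source, can be sketched as follows (the specific terms, histogram bins, and blending weight `alpha` are illustrative stand-ins, not the paper's actual formulation):

```python
def hist(channel, bins=4):
    """Normalised histogram of 8-bit intensity values."""
    h = [0] * bins
    for v in channel:
        h[min(v * bins // 256, bins - 1)] += 1
    return [c / len(channel) for c in h]

def hist_intersection(h1, h2):
    """Overlap of two normalised histograms: 1.0 for identical ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def pearson(x, y):
    """Pearson correlation between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def transfer_quality(target_gray, output_gray, source_col, output_col, alpha=0.5):
    """Blend of content similarity (target vs. output structure) and
    colour similarity (source vs. output histograms)."""
    content = pearson(target_gray, output_gray)
    colour = hist_intersection(hist(source_col), hist(output_col))
    return alpha * content + (1 - alpha) * colour
```

A perfect recoloring that keeps the target's structure and matches the source's palette scores 1.0 under this sketch.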
Publisher’s Note: This paper, originally published on 25 May 2016, was replaced with a revised version on 16 June 2016. If you downloaded the original PDF, but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems that cope with missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The proposed fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.
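The paper's new similarity score for reconstructed binary images is not reproduced here; as a point of reference, a common baseline for comparing a reconstructed binary fingerprint against its ground truth is a Jaccard-style overlap score:

```python
def binary_similarity(a, b):
    """Jaccard-style score between two binary images given as 0/1 grids:
    matching foreground pixels over the union of foreground pixels."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            inter += pa and pb
            union += pa or pb
    return inter / union if union else 1.0
```

The score is 1.0 for a perfect reconstruction and falls toward 0 as ridge pixels are misplaced.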
The Multiscale Retinex with Color Restoration (MSRCR) algorithm is used extensively in many different applications and performs well for a very large variety of natural images. However, processing three spectral channels at no fewer than three distinct scale constants makes the algorithm time consuming. In addition, optimal performance is not always obtained with default parameter settings, especially when processing images that have a dark subject against a very bright background. In this paper, to overcome these drawbacks of the Retinex algorithm, a wavelet transform (WT) based Retinex approach is proposed in which a modified version of the MSRCR algorithm is applied to the approximation coefficients of the transformed image. The detail coefficients are also modified to provide sharpening in addition to contrast enhancement. The proposed algorithm has lower computational complexity and also provides a more realistic rendition compared to MSRCR.
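The structure of the approach (enhance the approximation band, amplify the detail band, invert) can be sketched in one dimension with a single-level Haar transform; the actual algorithm applies a modified MSRCR to 2-D approximation coefficients, so the log compression and fixed gain below are illustrative stand-ins:

```python
import math

def haar_forward(x):
    """Single-level Haar transform: approximation and detail coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def enhance(signal, detail_gain=1.5):
    """Retinex-style log compression on the approximation band and a gain
    on the detail band for sharpening (gain value is illustrative)."""
    approx, detail = haar_forward(signal)
    compressed = [math.log1p(a) / math.log1p(255) * 255 for a in approx]
    sharpened = [d * detail_gain for d in detail]
    return haar_inverse(compressed, sharpened)
```

Because the Retinex-style processing touches only the half-length approximation band, the per-pixel cost drops relative to running MSRCR on the full image.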
This paper proposes a video stabilization method that uses space-time video completion for effective reconstruction of static and dynamic textures instead of frame cropping. The proposed method produces full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. We propose a set of descriptors that encapsulate the information on the periodic motion of objects necessary to reconstruct missing or corrupted frames. The background is filled in by extending spatial texture synthesis techniques using a set of 3D patches. Experimental results demonstrate the effectiveness of the proposed method in the task of full-frame video stabilization.
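The local-alignment idea, filling missing pixels of a frame from a neighboring frame after finding the shift that best matches the known pixels, can be sketched in one dimension (frames are lists with `None` marking missing pixels; the real method aligns 2-D image data and 3-D patches):

```python
def best_shift(frame, neighbor, max_shift=2):
    """Shift of `neighbor` that best aligns it with the known pixels of
    `frame`, by mean squared difference over the valid overlap."""
    def msd(s):
        pairs = [(f, neighbor[i - s]) for i, f in enumerate(frame)
                 if f is not None and 0 <= i - s < len(neighbor)]
        if not pairs:
            return float("inf")
        return sum((f - n) ** 2 for f, n in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=msd)

def complete(frame, neighbor, max_shift=2):
    """Fill None (missing) pixels of `frame` from the aligned neighbor."""
    s = best_shift(frame, neighbor, max_shift)
    return [neighbor[i - s] if f is None and 0 <= i - s < len(neighbor) else f
            for i, f in enumerate(frame)]
```

Pixels the neighbor cannot cover after alignment stay missing and would fall to the texture-synthesis stage in the full method.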
Modern image processing performed on board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures the information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which lead to improved denoising performance. Two formulations are provided: one with a simple closed-form solution, which we use for numerical result generation, and a second, integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public-domain imagery and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over the performance obtained with the baseline SSM model.
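The baseline being improved is the published Sendur-Selesnick bivariate shrinkage rule, which shrinks each wavelet coefficient using its parent from the next coarser scale; the sqrt(3) * sigma_n^2 / sigma term sets the deadzone size that the paper's optimization targets:

```python
import math

def bivariate_shrink(y, y_parent, sigma_noise, sigma_signal):
    """Sendur-Selesnick bivariate shrinkage of a wavelet coefficient `y`
    given its parent coefficient from the coarser scale."""
    r = math.sqrt(y * y + y_parent * y_parent)
    threshold = math.sqrt(3) * sigma_noise ** 2 / sigma_signal
    factor = max(r - threshold, 0.0) / r if r > 0 else 0.0
    return factor * y
```

Coefficient pairs whose joint magnitude falls inside the deadzone are zeroed as noise, while strong coefficients pass nearly unchanged, which is why tuning the deadzone directly controls the denoising/detail trade-off.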
This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that extracts local binary patterns directly from mesh data; for classification, we use a support vector machine. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, with 10-fold cross validation used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than bruise detection based on 2-D imaging.
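The pixel-grid version of the local binary pattern, which the mesh framework generalizes to 3-D mesh data, can be sketched as:

```python
def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 patch: each neighbour contributes
    a 1-bit if it is >= the centre pixel, read clockwise from top-left."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Histogram of LBP codes over all interior pixels; this histogram is
    the feature vector handed to the classifier."""
    hist = [0] * 256
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            patch = [row[j - 1:j + 2] for row in image[i - 1:i + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

On mesh data, the same thresholding idea is applied to values sampled around each facet rather than to a square pixel neighbourhood.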
Ultrasound imagery has been widely used for medical diagnoses. Ultrasound scanning is safe and non-invasive, and hence used throughout pregnancy for monitoring growth. In the first trimester, an important measurement is that of the Gestation Sac (GS). The task of measuring the GS size from an ultrasound image is done manually by a gynecologist. This paper presents a new approach to automatically segment a GS from a static B-mode image by exploiting its geometric features for early identification of miscarriage cases. To accurately locate the GS in the image, the proposed solution uses the wavelet transform to suppress the speckle noise by eliminating the high-frequency sub-bands, producing an enhanced image. This is followed by a segmentation step that isolates the GS through several stages. First, the mean value is used as a threshold to binarise the image, followed by filtering of unwanted objects based on their circularity, size, and mean greyscale value. The mean value of each object is then used to further select candidate objects. A Region Growing technique is applied as a post-processing step to finally identify the GS. We evaluated the effectiveness of the proposed solution by first comparing the automatic size measurements of the segmented GS against the manual measurements, and then integrating the proposed segmentation solution into a classification framework for identifying miscarriage cases and pregnancies of unknown viability (PUV). Both test results demonstrate that the proposed method is effective in segmenting the GS and classifying the outcomes with high accuracy (sensitivity (miscarriage) of 100% and specificity (PUV) of 99.87%).
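The circularity filter used to discard non-GS objects can be sketched with the standard isoperimetric ratio 4*pi*A/P^2 (the thresholds below are illustrative, not the paper's tuned values):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for elongated shapes."""
    return 4 * math.pi * area / (perimeter ** 2)

def candidate_objects(objects, min_circ=0.6, min_area=50):
    """Keep connected components large and round enough to be GS candidates
    (each object is a dict with precomputed 'area' and 'perimeter')."""
    return [o for o in objects if o["area"] >= min_area
            and circularity(o["area"], o["perimeter"]) >= min_circ]
```

A roughly circular sac scores near 1.0 while speckle streaks and shadows, which tend to be thin and elongated, score near 0 and are rejected.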
Mammalian skin is a complex organ composed of a variety of cells and tissue types. The automatic detection and quantification of changes in skin structures has a wide range of applications for biological research. To accurately segment and quantify nuclei, sebaceous glands, hair follicles, and other skin structures, a reliable segmentation of the different skin layers is needed. This paper presents an efficient segmentation algorithm to segment the three main layers of mouse skin, namely the epidermis, dermis, and subcutaneous layers. It also segments the epidermis into two sub-layers, the basal and cornified layers. The proposed algorithm uses an adaptive colour deconvolution technique on H&E-stained images to separate different tissue structures; inter-modes and Otsu thresholding techniques are effectively combined to segment the layers. A set of morphological and logical operations is then applied to each layer to remove unwanted objects. A dataset of 7000 H&E microscopic images of mutant and wild-type mice was used to evaluate the effectiveness of the algorithm. Experimental results examined by domain experts have confirmed the viability of the proposed algorithm.
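Otsu's method, one of the two thresholding techniques combined in the layer segmentation, can be sketched directly from a grey-level histogram:

```python
def otsu_threshold(hist):
    """Otsu's method: the threshold that maximises between-class variance.
    `hist` is a list of pixel counts per grey level."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum / w0, (grand_sum - cum) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a strongly bimodal stain histogram, as after colour deconvolution, the returned level cleanly separates tissue from background.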
The hippocampus is the region of the brain that is primarily associated with memory and spatial navigation. It is one of the first brain regions to be damaged when a person suffers from Alzheimer's disease. Recent research in this field has focussed on the assessment of damage to different blood vessels within the hippocampal region from high-throughput brain microscopic images. The ultimate aim of our research is the creation of an automatic system to count and classify different blood vessels such as capillaries, veins, and arteries in the hippocampal region. This work should provide biologists with efficient and accurate tools in their investigation of the causes of Alzheimer's disease. Locating the boundary of the region of interest in the hippocampus from microscopic images of mouse brain is the first essential stage towards developing such a system. This task benefits from the variation in colour channels and texture between the two sides of the hippocampus and the boundary region. Accordingly, the initial step of our research to locate the hippocampus edge uses a colour-based segmentation of the brain image followed by Hough transforms on the colour channel that isolates the hippocampus region. The output is then used to split the brain image into the two sides of the detected section of the boundary: the inside region and the outside region. Experimental results on a sufficient number of microscopic images demonstrate the effectiveness of the developed solution.
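The Hough transform step can be sketched as a (rho, theta) vote accumulator over edge points; peaks correspond to straight boundary segments (the angular resolution below is illustrative):

```python
import math

def hough_lines(points, n_theta=180):
    """Accumulate (rho, theta-index) votes for edge points (x, y);
    a peak in the accumulator marks a straight line through many points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# ten edge points on the vertical line x = 5 all vote for the same cell
acc = hough_lines([(5, y) for y in range(10)])
```

Voting in (rho, theta) space makes the detection robust to the gaps and noise typical of a segmented microscopy boundary.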
A new image denoising method is proposed in this paper. We consider an optimization problem with a linear objective function based on two criteria, namely the L2 norm and the first-order squared difference. The method is parametric, so a choice of parameters adapts the criteria of the objective function. The denoising algorithm consists of the following steps: 1) multiple denoising estimates are found on local areas of the image; 2) image edges are determined; 3) the parameters of the method are fixed and denoised estimates of the local area are found; 4) the local window is moved to the next position (local windows overlap) in order to produce the final estimate. A proper choice of the parameters of the introduced method is discussed, and a comparative analysis of the new denoising method against existing ones is performed on a set of test images.
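Steps 1 and 4, local estimates on overlapping windows averaged into a final estimate, can be sketched in one dimension; the plain windowed mean below stands in for the paper's two-criterion optimizer:

```python
def local_mean_estimates(signal, window=3):
    """Overlapping-window denoising skeleton: each window contributes its
    local estimate (here a mean), and each sample's final value averages
    all window estimates covering it."""
    n = len(signal)
    acc = [0.0] * n
    cnt = [0] * n
    for start in range(n - window + 1):
        est = sum(signal[start:start + window]) / window  # local estimate
        for k in range(start, start + window):
            acc[k] += est
            cnt[k] += 1
    return [a / c for a, c in zip(acc, cnt)]
```

Averaging the overlapping estimates suppresses the blocking artifacts that non-overlapping windows would introduce at window boundaries.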
The illicit production of 3D-printed keys from remote-sensed imagery is problematic because it allows a would-be intruder to access a secured facility without the attempt being as readily detectable as with conventional attack techniques. This paper considers the problem from multiple perspectives. First, it looks at different attack types and considers the prospective attack from a digital-information perspective. Second, based on this, techniques for securing keys are considered. Third, the design of keys is considered from the perspective of making them more difficult to duplicate using visible-light sensing and 3D printing. Finally, policy and legal considerations are discussed.
A common form of manipulation is to splice fragments of one image into another, either to remove or to blend objects. Inspired by this situation, we propose an authentication technique for detecting traces of the weighted-average splining technique. In this paper, we assume that an image composite is created by joining two images so that the edge between them is imperceptible. The weighted-average technique operates on overlapped images, making it possible to compute the grey-level value of points within a transition zone. This approach works on the assumption that although the splining process leaves a smooth transition zone, it may nevertheless alter the underlying statistics of an image; in other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, left and right, as references for their synthetic models. The overall authentication process is divided into two main stages: pixel predictive coding and weighting-function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work to expose the image splining artifact as evidence of digital tampering.
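The least-squares stage can be sketched for the simplified case of a single constant blend weight w, assuming composite = w*Il + (1-w)*Ir in the transition zone (the paper estimates full left and right weighting functions; this closed form is the one-parameter special case):

```python
def estimate_weight(composite, left, right):
    """Least-squares estimate of a single blend weight w from intensity
    triples (composite, Il, Ir) sampled in the transition zone."""
    num = sum((c - r) * (l - r) for c, l, r in zip(composite, left, right))
    den = sum((l - r) ** 2 for l, r in zip(left, right))
    return num / den
```

A systematic, well-fitting w across the zone is exactly the kind of introduced correlation that betrays splining, since natural image boundaries do not obey a smooth blend model.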
3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. We propose a change detection method that can support various applications under varying environmental conditions. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information, and change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud models is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that labels each voxel as surface or free space, and a color model that assigns each occupied voxel the mean color of all points falling within it. A preliminary change map is obtained by an XOR subtraction on the point voxel model. Then, the eight neighbors of each center voxel are examined; if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram over location and the hue color channel is estimated. Experimental evaluations of our algorithm show promising results for change detection, indicating all changing objects with a very limited false alarm rate.
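The preliminary XOR change detection on the voxel model can be sketched with plain sets of occupied voxel indices (voxel size and points are illustrative):

```python
def voxelize(points, size=0.1):
    """Quantise 3-D points into a set of occupied voxel indices."""
    return {(int(x // size), int(y // size), int(z // size))
            for x, y, z in points}

def changed_voxels(reference, current, size=0.1):
    """Symmetric difference (XOR) of the two occupancy sets: voxels that
    are occupied in exactly one of the scans."""
    return voxelize(reference, size) ^ voxelize(current, size)
```

The XOR flags both appeared and disappeared geometry in one pass; the neighbourhood and color checks described above then prune registration noise from this raw change set.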
This paper proposes an algorithm for book segmentation in bookshelf images. The algorithm consists of three parts. The first part is pre-processing, aimed at eliminating or reducing the effects of image noise and illumination conditions. The second part is near-horizontal line detection based on the Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected, and the obtained lines are used to segment the books. The proposed algorithm was tested on bookshelf images taken in the OPIE library at MTU, and the experimental results demonstrate good performance.
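Within a single shelf sub-image, near-vertical book boundaries can be approximated by column-wise edge density; this is a crude stand-in for the paper's line detector (a Hough-style detector would also handle tilted lines):

```python
def vertical_boundaries(edges, min_strength=0.6):
    """Column indices whose fraction of edge pixels exceeds a threshold,
    taken as candidate book boundaries in a single-shelf edge map
    (`edges` is a 0/1 grid, e.g. a Canny output)."""
    rows = len(edges)
    strengths = [sum(edges[r][c] for r in range(rows)) / rows
                 for c in range(len(edges[0]))]
    return [c for c, s in enumerate(strengths) if s >= min_strength]
```

Consecutive boundary columns can then be merged, and the spans between boundaries cropped out as individual book spines.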
This paper discusses (a) the design and implementation of an integrated radio tomographic imaging (RTI) interface for radio signal strength (RSS) data obtained from a wireless imaging sensor network (WISN), (b) the use of model-driven methods to determine the extent of regularization to be applied when reconstructing images from the RSS data, and (c) a preliminary study of the performance of the network.
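Regularized RTI reconstruction is commonly posed in the Tikhonov form x = (A^T A + lambda*I)^(-1) A^T y, where A maps voxel attenuation to per-link RSS loss and lambda is the regularization extent that the model-driven method selects; a small dense sketch (real systems are far larger and sparse):

```python
def solve(a, b):
    """Gauss-Jordan elimination for a small dense system a x = b."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def tikhonov(A, y, lam):
    """Regularised image estimate: x = (A^T A + lam*I)^(-1) A^T y."""
    rows, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(rows)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(rows)) for i in range(n)]
    return solve(AtA, Aty)
```

Larger lambda trades spatial resolution for robustness to RSS noise, which is exactly the balance the model-driven selection aims to set.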
In this paper, we present a novel concept of the quaternion discrete Fourier transform on the two-dimensional hexagonal lattice, which we call the two-dimensional hexagonal quaternion discrete Fourier transform (2-D HQDFT). The concept of the right-side 2-D HQDFT is described, and the left-side 2-D HQDFT is considered similarly. To calculate the transform, the image on the hexagonal lattice is described in the tensor representation, in which the image is presented by a set of 1-D signals, or splitting-signals, that can be processed separately in the frequency domain. The 2-D HQDFT can therefore be calculated by a set of 1-D quaternion discrete Fourier transforms (QDFTs) of the splitting-signals.
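The need to distinguish right-side and left-side transforms comes from the non-commutativity of quaternion multiplication, visible already in the Hamilton product:

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)
```

Since i*j = k but j*i = -k, multiplying the exponential kernel on the right versus the left of the image samples yields genuinely different transforms.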
This work proposes a novel contour detection algorithm based on a high-performance wavelet-analysis algorithm for multimedia applications. To reduce the effect of noise on the result of peak detection, we consider direct and inverse wavelet differentiation. Extensive experimental evaluation on noisy images demonstrates that our contour detection method significantly outperforms competing algorithms. The proposed algorithm provides a means of coupling our system to recognition applications such as the detection and identification of vehicle number plates.
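The role of wavelet differentiation can be sketched in one dimension: the Haar detail band acts as a smoothed derivative, so thresholding its magnitude localizes contour candidates while the pairwise averaging suppresses noise:

```python
def haar_detail(signal):
    """Detail (difference) band of a single-level Haar transform: a
    smoothed derivative of the signal at half resolution."""
    return [(signal[i] - signal[i + 1]) / 2
            for i in range(0, len(signal) - 1, 2)]

def contour_positions(signal, threshold):
    """Indices (in downsampled coordinates) where the detail magnitude
    exceeds the threshold, i.e. likely contour crossings."""
    return [i for i, d in enumerate(haar_detail(signal))
            if abs(d) > threshold]
```

This is only the one-scale skeleton; the paper's direct and inverse wavelet differentiation refines such candidates across scales.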
Some old photographs are damaged due to improper archiving (e.g. exposure to direct sunlight, humidity, insects, etc.) or have physical damage resulting in cracks and scratches, unwanted marks, spots, dust, and so on. This paper focuses on the detection and removal of cracks from digital images. The proposed method consists of the following steps: pre-processing, crack detection, and image reconstruction. The pre-processing step suppresses noise and small defects in images. For crack identification, we use modified local binary patterns to form feature vectors, and a non-linear SVM for crack recognition. A combined inpainting method using structure and texture restoration is applied at the image reconstruction step; image inpainting is the process of restoring lost or damaged regions or modifying the image contents imperceptibly. The technique detects and removes horizontal, vertical, and diagonal cracks and other defects in complex image scenes. We implemented the proposed method on several mobile platforms for automatic image enhancement. The presented examples demonstrate the effectiveness of the algorithm in crack detection and removal.
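The structure-restoration part of the reconstruction step can be sketched as simple diffusion inpainting over the detected crack mask (the paper's combined structure-and-texture method is more elaborate):

```python
def inpaint(image, mask, iterations=50):
    """Diffusion inpainting: repeatedly replace each masked pixel with the
    average of its 4-neighbours until the values settle."""
    img = [row[:] for row in image]
    h, w = len(img), len(img[0])
    for _ in range(iterations):
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    nbrs = [img[i + di][j + dj]
                            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= i + di < h and 0 <= j + dj < w]
                    img[i][j] = sum(nbrs) / len(nbrs)
    return img
```

Diffusion alone blurs texture inside wide cracks, which is why a texture-synthesis component is combined with it for complex scenes.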
Despite the advancements of fingerprint recognition in the 2-D and 3-D domains, authenticating deformed or post-mortem fingerprints continues to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint, and the finger must be precisely and carefully manipulated to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method for 3-D deformed/post-mortem finger modeling that produces a 2-D rolled-equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. The results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as the palm, foot, and tongue for security and administrative applications.
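The unrolling stage can be sketched under a cylindrical-finger assumption: each surface point maps to an arc-length coordinate around the finger axis and its height along the axis (the axis placement and per-point radius handling below are simplified assumptions, not the paper's full model):

```python
import math

def unroll(points, center=(0.0, 0.0)):
    """Map 3-D finger-surface points (x, y, z) to flat (s, y) coordinates,
    where s = theta * r is the arc length around an assumed cylindrical
    axis running along y through `center` in the x-z plane."""
    cx, cz = center
    out = []
    for x, y, z in points:
        theta = math.atan2(z - cz, x - cx)
        r = math.hypot(x - cx, z - cz)
        out.append((theta * r, y))
    return out
```

Preserving arc length rather than projecting orthographically is what makes the flattened print "rolled equivalent", i.e. comparable to a conventional ink-rolled impression.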
The goals of this paper are (1) to test computer-aided classification of prostate cancer histopathology images based on the bag-of-words (BoW) approach, (2) to evaluate the performance of grade 3 versus grade 4 classification against the approach proposed by Khurd et al. [9], and (3) to classify the different cancer grades, namely grades 0, 3, 4, and 5, using the proposed approach. System performance is assessed on 132 prostate cancer histopathology images of different grades. The performance of SURF features is also analyzed by comparing the results against SIFT features for different cluster sizes. The results show 90.15% accuracy in the detection of prostate cancer images using SURF features with 75 clusters for k-means clustering, and higher sensitivity for SURF-based BoW classification than for SIFT-based BoW.
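The core of a BoW pipeline is quantizing local descriptors against a learned visual vocabulary. The sketch below shows only that assignment-and-histogram step in plain numpy; SURF/SIFT extraction and the k-means training of the centres (75 clusters in the paper's best configuration) are assumed to have been done upstream, and the function name is an assumption of this sketch.

```python
import numpy as np

def bow_histogram(descriptors, centres):
    """Assign each local descriptor (e.g. a SURF vector) to its nearest
    visual word and return the normalised word-count histogram."""
    # squared Euclidean distance from every descriptor to every centre
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # nearest-centre index
    hist = np.bincount(words, minlength=len(centres)).astype(float)
    return hist / hist.sum()                      # image-level BoW vector
```

The resulting per-image histograms are what a classifier (an SVM, for instance) would consume to separate the cancer grades.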
In this paper, a novel technique for mosaicking multiview contactless finger images is presented. The technique uses different correlation methods, such as alpha-trimmed correlation, Pearson's correlation [1], Kendall's correlation [2], and Spearman's correlation [2], to combine multiple views of the finger. The key contributions of the algorithm are that it (1) stitches images more accurately, (2) provides better image fusion, (3) yields a better overall visual effect, and (4) is more reliable. Extensive computer simulations show that the proposed method produces stitched images that are better than or comparable to those of several state-of-the-art methods, such as the ones presented by Feng Liu [3], K. Choi [4], H. Choi [5], and G. Parziale [6]. In addition, we compare various correlation techniques with the correlation method mentioned in [3] and analyze the output. In the future, this method could be extended to obtain a 3-D model of the finger from multiple views, and could help in generating scenic panoramic images and underwater 360-degree panoramas.
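The idea behind correlation-driven stitching is to score candidate alignments of the overlap region and keep the best one. The toy sketch below does this with Pearson's correlation and a horizontal shift search; the overlap width, function names, and shift range are assumptions, and the paper's alpha-trimmed, Kendall, and Spearman variants would simply replace the scoring function.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two same-shape patches."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_seam_offset(left, right, max_shift=6):
    """Slide the right view's leading columns over the left view's
    trailing columns; return the shift with the highest correlation.
    A toy version of correlation-based multiview alignment."""
    w = right.shape[1] // 2  # assumed width of the overlap strip
    scores = [pearson(left[:, -w - s: left.shape[1] - s], right[:, :w])
              for s in range(max_shift)]
    return int(np.argmax(scores))
```

A rank-based score such as Spearman's can be more robust to the illumination differences between views, which is presumably why the paper evaluates several correlation measures side by side.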
In this paper we present a low-level image descriptor, the Histogram of Oriented Phase, based on the phase congruency concept and Principal Component Analysis (PCA). Since the phase of a signal conveys more information about signal structure than the magnitude, the proposed descriptor can identify and localize image features more precisely than gradient-based techniques, especially in regions affected by illumination changes. The features are formed by extracting the phase congruency information for each pixel in the image with respect to its neighborhood. Histograms of the phase congruency values of local image regions are computed with respect to orientation, and these histograms are concatenated to construct the Histogram of Oriented Phase (HOP) features. The dimensionality of the HOP features is reduced using PCA to form the HOP-PCA descriptor. Because phase congruency is a dimensionless quantity, the HOP-PCA descriptor is more robust to image scale variations as well as contrast and illumination changes. Several experiments were performed on the INRIA and DaimlerChrysler datasets to evaluate the performance of the HOP-PCA descriptor. The experimental results show that the proposed descriptor achieves better detection performance and lower error rates than a set of state-of-the-art feature extraction methods.
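The final PCA step of the pipeline is standard and can be sketched compactly: center the concatenated HOP histograms and project them onto their top principal directions via SVD. Phase congruency computation itself is not reproduced here, and the function name and component count are assumptions of this sketch.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors (rows, e.g. concatenated HOP histograms)
    onto their top principal components -- the reduction that turns
    HOP features into a HOP-PCA descriptor."""
    centred = features - features.mean(axis=0)
    # rows of vt are principal directions, ordered by singular value
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T
```

Because the projection keeps the highest-variance directions first, the retained components carry most of the descriptive power of the full HOP vector at a fraction of its dimensionality.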