Open Access
5 January 2023

Investigating the impact of thresholding and thinning methods on the performance of partial fingerprint identification systems: a review
Mahesh Joshi, Bodhisatwa Mazumdar, Somnath Dey
Abstract

A fingerprint identification system is an application of pattern recognition and image processing. The performance of a fingerprint-based biometric system relies on the pre-processing techniques applied to fingerprint images. In particular, thresholding and thinning methods are used to detect minutiae points, the local features often utilized to identify a person uniquely. However, studies on partial fingerprints exposed the MasterPrint vulnerability of partial fingerprint identification systems, wherein the system performs non-unique user identification. Thresholding and thinning techniques may generate spurious minutiae and, in turn, a large number of MasterPrints. Here, we investigate the impact of thresholding and thinning methods on the identification accuracy and the percentage of MasterPrints generated by a partial fingerprint identification method. The experiments comprise four thresholding methods, namely, iterative optimal thresholding, Otsu’s global image thresholding, Niblack local thresholding, and Bernsen’s local image thresholding, and four thinning methods, namely, the Khalid, Mariusz, Marek (KMM) thinning algorithm, the Khalid, Marek, Mariusz, Marcin (K3M) thinning algorithm, the Hilditch thinning algorithm, and the Stentiford thinning algorithm. The results demonstrate that the identification accuracy and the percentage of MasterPrints generated vary significantly when the underlying pre-processing methods are replaced. Consequently, not every combination of thresholding and thinning methods is suitable for user identification in high-security applications using a partial fingerprint identification method. The investigation outcomes provide guidelines for demonstrating the robustness of a partial fingerprint identification method.

1.

Introduction

Fingerprint biometric systems (FBSs) require minimal involvement from an individual to capture the sensed finger impression. These systems also have high accuracy and are affordable to implement. Hence, FBSs are widely used in commercial applications for access control and user identification. An FBS serves both user authentication and identification applications. During the enrollment process, the system administrator enrols a legitimate person by storing multiple samples of their finger in the database as encrypted templates. In the authentication scenario, the system verifies the user’s claim by comparing the recently acquired fingerprint template with only those stored templates that possess the claimed identity. The claim is accepted or rejected based on the similarity score between the templates. In contrast, an identification system fetches all the stored templates for comparison with the recently created template. The identity of the stored template generating the highest similarity score becomes the unknown user’s identity.

In general, FBSs perform ridge pattern recognition utilizing the features extracted from the fingerprint image for access control, security applications, user authentication, and identification. Fingerprints of a user captured by sensors on different occasions usually differ due to various physiological factors of the individual and environmental conditions. Hence, these images are pre-processed for better recognition accuracy. Thresholding and thinning are among the predominantly employed pre-processing methods in fingerprint recognition. Usually, fingerprint images are enhanced using a thresholding technique to ensure that samples of the same finger captured at multiple instances appear similar. Thresholding enhances the dark regions of the fingerprint, termed ridges, and reduces the intensity of the remaining portion, called valleys. Minutiae are the local fingerprint features formed at points where three ridges emerge or a single ridge ends.1 However, as a minutia represents a single pixel within a ridge, a binarized fingerprint image obtained by thresholding may not facilitate accurate minutiae detection across multiple samples of the same fingerprint.2 Therefore, a single-pixel version of the fingerprint ridges is obtained by thinning the binarized image. Consequently, a combination of thresholding and thinning methods for minutiae-based feature extraction improves the person recognition accuracy of an FBS. Ratha et al.3 employed a projection-based thresholding approach3 and the Human Information Processing Laboratory’s Image processing System library4 for thinning. Feng5 performed binarization of fingerprints using a local threshold-based method, and the Guo and Hall method6 was used for thinning the binarized fingerprints. Joshi et al.7 used an adaptive thresholding approach8 and the Zhang–Suen thinning algorithm9 for thresholding and thinning of partial fingerprints in their experimentation.

FBSs are among the most reliable means of individual authentication, authorization, and identification due to low-cost devices, user convenience, low response time, and high accuracy. However, Roy et al.10 investigated partial fingerprint identification systems and observed that partial fingerprints might not be unique to every individual. The authors applied the term MasterPrint to those partial fingerprints identifying at least 4% of the distinct subjects in the enrolled database. They experimented on partial fingerprints cropped from the FVC 2002 full fingerprint dataset using the commercial VeriFinger software development kit. Their investigation concluded that a dictionary of the top five MasterPrints could disclose the identity of more than 60% of the unique subjects. Bontrager et al.11 investigated the feasibility of using latent variable evolution to generate complete image-level synthetic MasterPrints, termed DeepMasterPrints. The probability of a successful attack using a dictionary of DeepMasterPrints was demonstrated to be significantly high on the NIST Special Database 9 fingerprint dataset12 and the FingerPass DB7 dataset.13 These statistics demonstrate the severity of the MasterPrint vulnerability.

Pre-processing fingerprint grayscale images is a crucial step in fingerprint recognition. Due to environmental and physiological conditions, fingerprint images of the same finger captured at different instances are not similar. Pre-processing in fingerprint recognition nevertheless remained less explored, because it was common practice in the literature to allow tolerance during the feature matching step. The investigation by Roy et al.10 on partial fingerprint identification disclosed the MasterPrint vulnerability, and the investigators reported the acceptance of approximately similar features as one of the reasons for generating MasterPrints.10 The first method to address the MasterPrint vulnerability by opting for strict feature matching was presented by Joshi et al.7 The method performed feature extraction based on geometric constructs formed using adjacent minutiae. Hence, the method was expected to produce only marginally varying identification rates and percentages of MasterPrints generated if the underlying pre-processing approaches were replaced with other thresholding and thinning algorithms from the literature. The work in this paper investigates how far the identification accuracy and the percentage of MasterPrints generated vary when the pre-processing steps are replaced with other thresholding and thinning methods from the literature.

Joshi et al.7 proposed a minutiae geometry-based MasterPrint mitigation method, employing an adaptive thresholding approach8 for binarizing the partial fingerprints and the Zhang–Suen algorithm9 to obtain thinned fingerprints. The approach achieved up to 97% accuracy and generated 0.1% MasterPrints. However, as the image processing literature contains numerous techniques for thresholding and thinning, it is imperative to investigate the robustness of these methods in terms of identification accuracy and of addressing the MasterPrint vulnerability for a partial fingerprint identification system. In this regard, this paper provides exhaustive experimentation comprising four thresholding and four thinning methods to study their impact on partial fingerprint identification system performance.

The investigations in this paper aim to answer the following questions:

  • 1. Given a rotation invariant, robust, minutiae-based local feature extraction method, how far do the results vary if the fingerprint pre-processing is carried out using diverse thresholding and thinning methods?

  • 2. What is the more acceptable measure for a fingerprint recognition method to signify appropriate pre-processing: the average count of minutiae in the fingerprint or the average minutiae density?

  • 3. Do false minutiae detection and removal methods help improve the fingerprint identification system performance if the underlying pre-processing does not yield the thinned fingerprint image that preserves ridge connectivity and ridge pattern?

  • 4. Are the existing image pre-processing methods beneficial for fingerprint images, or is there a requirement for a rigorously tested fingerprint pre-processing method?

The rest of the paper is organized as follows. Section 2 introduces the various components of an FBS. Section 3 introduces the thresholding approaches employed in the experimentation. Section 4 provides a brief introduction to the thinning approaches that follow the thresholding process in the investigation. The experimental setup to investigate the impact of the various pre-processing methods on the performance of a partial fingerprint identification and MasterPrint mitigation scheme is presented in Sec. 5. The result analysis and performance evaluation of the possible combinations of thresholding and thinning approaches are given in Sec. 6. A discussion of the results follows in Sec. 7. Finally, Sec. 8 concludes the paper.

2.

Fingerprint Biometric System: An Overview

An FBS comprises several components, each performing a dedicated function in fingerprint recognition. The steps involved in a minutiae-based FBS are shown in Fig. 1. A sensor on an input device captures the fingerprint for the portion of a finger that touches it. The area of the finger in contact with the sensing device forms dark regions termed ridges, whereas the remaining portion forms valleys. Thus, fingerprint acquisition creates a ridge-valley pattern for an input finger. However, distortions due to varying pressure, environmental factors, and sweaty or dry fingers are certain when mapping a three-dimensional fingertip onto a two-dimensional plane.14 Hence, fingerprint pre-processing is carried out to facilitate feature extraction. Fingerprint pre-processing usually involves enhancing the quality of the fingerprint image, i.e., producing high contrast between ridges and valleys, to facilitate minutiae-based local feature extraction.15

Fig. 1

Various components involved in minutiae feature-based FBS.

JEI_32_1_010901_f001.png

Image enhancement techniques modify the intensities of pixels in an image to make it more suitable for a specific application. An enhancement method appropriate for one application may not deliver the expected outcome for another. Fingerprint image enhancement approaches can be categorized as histogram-based, filtering-based, or transformation-based. Histogram specification is an image processing approach that utilizes image histograms to adjust contrast.16 The histogram depicts the brightness distribution and is primarily used to enhance local contrast without affecting the overall contrast. Histogram equalization (HE) exploits the statistical relationship between each gray level in the image and the number of pixels at that gray level, i.e., the frequency of each gray level. However, it is often necessary to combine a variety of simple and effective algorithms or to fuse other enhancement techniques to achieve the final enhancement effect.17 Furthermore, HE techniques add noise to the output image and increase background contrast.18,19

Usually, filter-based methods generate image frequency spectrum data. Filters are utilized in pre-processing to achieve two objectives: first, to fill small gaps in the ridge direction, i.e., a low-pass effect; second, to increase the discrimination between ridges and valleys in the direction orthogonal to the ridges, i.e., a band-pass effect.20 Gabor filters act as band-pass filters to remove noise and preserve the true ridge-valley pattern. The convolutional nature of the Gabor filter contributes to high computational complexity, leading to an overall increase in the running time of user verification and identification.14 Furthermore, applying Gabor filters requires reliable estimates of the local context, i.e., the local orientation and ridge frequency. Failing to estimate the local context correctly may create artefacts in the output image, consequently increasing the number of errors in user identification or verification.14 The major drawbacks of Gabor filters include their limited maximum bandwidth, i.e., approximately one octave, and their sub-optimality when seeking broad spectral information with maximal spatial localization.14 Frequency-domain techniques are computationally less efficient and require more processing resources to implement.14

Thresholding is a transformation-based image enhancement method. It generally involves two steps: determining a gray threshold according to some objective criterion, and assigning each pixel to the background or the foreground class. The objective criterion may consider neighboring pixel intensities or the intensities of the entire image; accordingly, a thresholding method is classified as a local or a global approach. Thresholding a digital image is beneficial for segmenting a region of interest from the background. When applied to fingerprint images, thresholding serves two purposes: it isolates the fingerprint from the untouched area of the sensing device, and it highlights ridge patterns. Thresholding usually requires fewer computations and is therefore relatively easy to implement compared with other techniques.21

Fingerprint recognition involving minutiae-based features may employ direct grayscale, binarized, or thinned images.22 Since minutiae are single-pixel locations in the fingerprint image, a thinned fingerprint image facilitates minutiae detection. Hence, the fingerprint image is often thinned after thresholding and before minutiae detection. The crossing number (CN) is the most widely used and accepted measure for minutiae detection.23 The fingerprint template of a user comprises encrypted minutiae-based features. The templates for each user are stored in the template database during the enrollment phase. During verification and identification, the stored templates are retrieved and compared against the newly created fingerprint template. In the verification process, only the templates corresponding to the identity claimed by the user are fetched from the database. In the identification process, however, the newly acquired fingerprint template is compared with every stored template. The system generates a similarity score, a numeric value measuring the likelihood that the templates under comparison belong to the same user. Thus, a high score corresponds to a higher probability that the templates were generated from the same finger. In verification, the user’s claim is accepted or rejected; in identification, an unknown user’s identity is declared.
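The crossing number measure mentioned above can be sketched as follows. This is a minimal illustration, assuming the thinned skeleton is a binary (0/1) NumPy array; the function names are illustrative, not taken from the reviewed methods.

```python
import numpy as np

def crossing_number(skel, r, c):
    """CN at pixel (r, c): half the sum of absolute differences
    between consecutive pixels in the circular 8-neighborhood."""
    n = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
         skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
    return sum(abs(n[i] - n[(i + 1) % 8]) for i in range(8)) // 2

def detect_minutiae(skel):
    """Scan a thinned image: CN == 1 marks a ridge ending,
    CN == 3 marks a bifurcation."""
    endings, bifurcations = [], []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] == 1:
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    endings.append((r, c))
                elif cn == 3:
                    bifurcations.append((r, c))
    return endings, bifurcations
```

For a short horizontal ridge segment, for instance, the two extremities are reported as ridge endings and no bifurcations are found.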

3.

Thresholding Methods

Image pre-processing techniques improve the quality of an image while preserving its original contents.24 However, enhancement is not always useful, as crucial information may be lost in the process. There is an infinitesimally small probability of the biometric system acquiring exactly the same fingerprint each time a user touches the biometric sensor. Weather conditions such as moisture or heat, sweating around the fingertip, the orientation of the finger, and the pressure exerted on the sensor surface are some of the reasons why fingerprint samples of the same finger acquired at different instances are dissimilar. Therefore, thresholding approaches are generally employed to produce approximately similar ridge patterns from several samples of the same finger. These techniques are broadly categorized as global, local, and hybrid methods.25 Figure 2 shows a grayscale image and the transformed images after applying threshold segmentation, a low threshold, and a high threshold. The figure shows that the threshold-segmented version is the most appropriate for object recognition, as it retains the shapes while admitting negligible noise during the transformation. Fingerprint biometric researchers and vendors use image enhancement techniques for research activities and consumer products. However, not every method is beneficial for a biometric application; some may adversely affect system accuracy by removing true minutiae or inserting false minutiae. Consequently, such practices lead to low identification rates and high MasterPrint generation. Therefore, it is imperative to investigate the impact of various combinations of pre-processing techniques and examine the system performance.

Fig. 2

Image thresholding (a) original image; (b) threshold segmentation; (c) low threshold; and (d) high threshold.24

JEI_32_1_010901_f002.png

Shaikh et al.26 evaluated six thresholding methods for performance benchmarking of various global and local thresholding methods toward fingerprint-based biometric recognition systems. Their work forms the basis for selecting the thresholding approaches employed in our experiments. The thresholding algorithms used during the experiments include iterative optimal thresholding,24 Otsu’s global image thresholding,27 Niblack local thresholding,28 and Bernsen’s local image thresholding.29 The following subsections briefly discuss these methods.

3.1.

Iterative Optimal Thresholding

The iterative optimal thresholding approach models the image pixels as a histogram comprising two normal distributions: one for the area of interest, i.e., the ridge portion, and one for the background, i.e., the valley portion.30 The approach takes the minimum probability lying between the maxima of the two distributions as the initial threshold, Ti. It then iteratively updates Ti to minimize the segmentation error.24 The value of Ti for which the segmentation error cannot be reduced further is taken as the optimal threshold, To. Figure 3 shows the gray-level histograms used to decide the initial threshold, approximated by two normal distributions: the probability distributions of the background and of the object of interest. The final optimal threshold is set to give the minimum probability of segmentation error.24
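One common realization of this iterative scheme is the Ridler–Calvard procedure, in which the threshold is repeatedly reset to the midpoint of the two class means until it stabilizes. The following is a minimal sketch, assuming a grayscale NumPy array; the function name and the convergence tolerance `eps` are illustrative choices, not taken from the reviewed methods.

```python
import numpy as np

def iterative_optimal_threshold(img, eps=0.5):
    """Iterative threshold selection: start from the global mean,
    then move the threshold to the midpoint of the foreground and
    background means until it no longer changes appreciably."""
    t = img.mean()                      # initial threshold Ti
    while True:
        fg = img[img > t]               # object (ridge) pixels
        bg = img[img <= t]              # background (valley) pixels
        if fg.size == 0 or bg.size == 0:
            return t                    # degenerate image: one class
        t_new = (fg.mean() + bg.mean()) / 2.0
        if abs(t_new - t) < eps:        # converged: optimal threshold To
            return t_new
        t = t_new
```

On a bimodal image the returned threshold lands between the two modes, separating ridges from valleys.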

Fig. 3

Gray-level histograms approximated by two normal distributions—probability distributions of background and object of interest. The threshold is set to give minimum probability of segmentation error.24

JEI_32_1_010901_f003.png

3.2.

Otsu’s Global Image Thresholding

The approach returns an intensity as a threshold to divide the image pixels into background and foreground. The algorithm iteratively tries to maximize the inter-class intensity variance or, equivalently, minimize the intra-class intensity variance.27 It computes a histogram, H(i), with L bins and a probability, p(i), for each intensity, i, within the image. Let t be the threshold under consideration. The probabilities of a pixel belonging to the background, Wb, and the foreground, Wf, are computed as follows:

Eq. (1)

Wb(t) = ∑_{i=0}^{t−1} p(i),

Eq. (2)

Wf(t) = ∑_{i=t}^{L−1} p(i).

The approach then calculates the within-class variance σw²(t) as

Eq. (3)

σw²(t) = Wb(t) × σb²(t) + Wf(t) × σf²(t),
where σb²(t) and σf²(t) are the background and foreground gray-level variances, respectively. The approach returns the threshold corresponding to min(σw²(t)) as the desired threshold. Figure 4 shows a grayscale character A and its transformed version after applying Otsu’s global image thresholding method. The difference between the foreground and background is noticeable after thresholding.
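The search over candidate thresholds described by Eqs. (1)–(3) can be sketched as follows, assuming an integer-valued grayscale NumPy array; the function name is illustrative.

```python
import numpy as np

def otsu_threshold(img, L=256):
    """Otsu's method: choose t minimizing the within-class variance
    sigma_w^2(t) = Wb(t)*var_b(t) + Wf(t)*var_f(t)."""
    hist = np.bincount(img.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                    # probability p(i) per intensity
    i = np.arange(L)
    best_t, best_var = 0, np.inf
    for t in range(1, L):
        wb, wf = p[:t].sum(), p[t:].sum()    # class probabilities Wb, Wf
        if wb == 0 or wf == 0:
            continue                          # skip empty classes
        mb = (i[:t] * p[:t]).sum() / wb       # background mean
        mf = (i[t:] * p[t:]).sum() / wf       # foreground mean
        vb = (((i[:t] - mb) ** 2) * p[:t]).sum() / wb
        vf = (((i[t:] - mf) ** 2) * p[t:]).sum() / wf
        var_w = wb * vb + wf * vf             # within-class variance
        if var_w < best_var:
            best_t, best_var = t, var_w
    return best_t
```

For a two-valued image the within-class variance drops to zero for any threshold between the two gray levels, so any such threshold is optimal.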

Fig. 4

Image thresholding using Otsu’s global image thresholding method.27

JEI_32_1_010901_f004.png

3.3.

Niblack Local Thresholding

The Niblack algorithm is a local thresholding approach.28 It uses a fixed-size rectangular window, w, surrounding a reference pixel, p, and slides the window over the entire image, I. The window size is application dependent, with a default value of 15. The approach computes the local mean, μw, and standard deviation, σw, of the window region.31 The following equation gives the local threshold, Tw, for a given window, w:

Eq. (4)

Tw=μw+(0.2)×σw.

Experimental results show that the approach generates thresholding noise in undesired gray regions.32 Figure 5 shows a test image, the ground truth, i.e., the reference image from the HDIBCO 2016 dataset, and the image after applying Niblack’s thresholding. The resulting image shows heavy background noise in the non-text, i.e., shadow, region.
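The per-pixel rule of Eq. (4) can be sketched as follows, assuming a grayscale NumPy array; the edge-padding choice and function name are illustrative, not from the cited implementation.

```python
import numpy as np

def niblack_threshold(img, w=15, k=0.2):
    """Niblack local binarization: Tw = mu_w + k * sigma_w computed
    over a w x w window centered on each pixel."""
    img = img.astype(float)
    half = w // 2
    padded = np.pad(img, half, mode='edge')   # replicate borders
    out = np.zeros_like(img, dtype=np.uint8)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + w, c:c + w]    # w x w neighborhood
            t = win.mean() + k * win.std()    # local threshold Tw
            out[r, c] = 1 if img[r, c] > t else 0
    return out
```

Note that in perfectly uniform regions the pixel equals the local mean and the standard deviation is zero, so no pixel exceeds its threshold; this is one way the method produces the noise artifacts mentioned above.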

Fig. 5

Image thresholding using Niblack’s local thresholding method: (a) test image; (b) ground truth; and (c) after applying Niblack thresholding method.33

JEI_32_1_010901_f005.png

3.4.

Bernsen’s Local Image Thresholding

Bernsen’s approach is another local thresholding method.29 For a given image, I, the approach initializes a local contrast threshold, l, and a neighborhood window size, w, e.g., l=15 and w=3. The algorithm assigns the lowest and highest gray levels within the w×w window to Imin and Imax, respectively. The local threshold, Thl, and the contrast measure, Cm, are computed using the following equations:29

Eq. (5)

Thl = (Imax + Imin) / 2,

Eq. (6)

Cm = Imax − Imin.

If Cm < l, i.e., the neighborhood is nearly uniform in gray level, all of its pixels are assigned to the same class (background or foreground), with a global thresholding approach deciding which class.34 Otherwise, the local threshold Thl binarizes the pixel. Figure 6 shows a sample text image before and after applying Bernsen’s thresholding method. The difference between the background and the foreground is evident after thresholding. A sample grayscale fingerprint image and its versions after applying the thresholding methods employed in the investigations are shown in Fig. 7. The figures show that each method applied a different threshold to the original image. Furthermore, the ridge patterns in the image generated by Bernsen’s method are clearly distinguishable compared with the other methods.
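The window rule of Eqs. (5) and (6) can be sketched as follows, assuming a grayscale NumPy array; the fallback global threshold `global_t` and the function name are assumed values for illustration.

```python
import numpy as np

def bernsen_threshold(img, w=3, l=15, global_t=128):
    """Bernsen local binarization: use Thl = (Imax + Imin) / 2 when the
    local contrast Cm = Imax - Imin is high enough; otherwise treat the
    low-contrast window as one class via a global threshold."""
    img = img.astype(int)
    half = w // 2
    padded = np.pad(img, half, mode='edge')
    out = np.zeros_like(img, dtype=np.uint8)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + w, c:c + w]
            imin, imax = int(win.min()), int(win.max())
            thl = (imax + imin) // 2           # local threshold Thl
            cm = imax - imin                    # contrast measure Cm
            t = thl if cm >= l else global_t    # low contrast: one class
            out[r, c] = 1 if img[r, c] >= t else 0
    return out
```

Pixels deep inside uniform regions are classified by the global fallback, while pixels near ridge-valley transitions are classified by the local mid-gray threshold.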

Fig. 6

Image thresholding using Bernsen’s local thresholding method: (a) original image and (b) image after thresholding with l=15 and w=3.35

JEI_32_1_010901_f006.png

Fig. 7

A grayscale fingerprint image and its various versions after applying thresholding approaches employed in the investigation.

JEI_32_1_010901_f007.png

4.

Thinning Approaches

A thinning algorithm produces a single-pixel-wide skeletal structure that highlights prominent features of the original image. In general, a binarized image is employed for the thinning process to ensure connectivity among the various regions within the image. Thinning helps in determining topological and metric properties to count, measure, and classify relevant features. However, local noise in the image easily affects the resultant skeleton.36 Thinning algorithms are mainly utilized for object representation, detection, manipulation, comparison, tracking, recognition, and compression.

Minutiae are the most widely used and accepted features in FBSs.11 In general, minutiae points are stored as their (x,y) coordinates, orientation angle, and type, i.e., ridge ending or bifurcation. Minutiae-based FBSs employ correlations among minutiae within an image during comparison. Therefore, locating minutiae as accurately as possible within two samples of the same finger is highly desirable. A robust thinning approach operating on a correctly binarized fingerprint image can improve system performance in such circumstances. However, a given thinning approach may also reduce recognition accuracy if it produces substantial false minutiae due to breaks in the ridge patterns.

Nazarkevych et al.37 evaluated the effectiveness of image thinning methods in biometric security systems. The authors analyzed the Zhang and Suen9 and Hilditch38 thinning algorithms, among others. Saha et al.39 presented a comprehensive review of existing thinning methods and their applications, discussing thinning approaches applicable to fingerprint analysis. The work by Nazarkevych et al.37 and Saha et al.39 encouraged us to experiment with the Hilditch thinning algorithm38 and the Stentiford thinning method.40 The Saeed et al.41 algorithm and its modified version by Tabedzki et al.42 have been claimed by their designers to be universal image thinning algorithms. Hence, these two methods were utilized to verify the claim of their robustness in partial fingerprint identification. The following subsections briefly explain the thinning methods used in the investigation.

4.1.

KMM Thinning Algorithm

The Khalid, Mariusz, Marek (KMM) approach accepts a binarized image wherein binary 1 represents the dark region to be thinned. First, it converts the 1’s adjacent to boundary 0’s into 2’s and those in open elbow bends into 3’s.43 The method then considers the non-zero positions in the image and identifies locations, x, having 2, 3, or 4 sticking neighbors, changing all such x to 4. A predefined table, the deletion array, provides the neighborhood sums for which x is a probable target for removal. The method iteratively eliminates such x while ensuring that connectivity remains intact. Finally, the approach removes the unnecessary 2’s and 3’s until it produces a single-pixel-width thinned image.44 Figure 8 shows the steps involved in thinning a binarized image using the KMM algorithm. The resultant thinned image preserves the pixel connectivity of the input image.

Fig. 8

Steps involved in the KMM thinning algorithm.43

JEI_32_1_010901_f008.png

4.2.

K3M Thinning Algorithm

The Khalid, Marek, Mariusz, Marcin (K3M) algorithm is a modified version of KMM.41 The algorithm iterates over seven phases until it generates a thinned image. These phases can be summarized as follows:

  • 1. Mark boundary pixels, b

  • 2. Remove b’s with 3 adjacent neighbors

  • 3. Remove b’s having 3 or 4 adjacent neighbors

  • 4. Remove b’s with 3, 4, or 5 adjacent neighbors

  • 5. Remove b’s with 3, 4, 5, or 6 adjacent neighbors

  • 6. Remove b’s with 3, 4, 5, 6, or 7 adjacent neighbors

  • 7. Unmark remaining boundary pixels

If the current iteration of these seven phases modifies the image, the image undergoes another iteration; otherwise, the algorithm stops, yielding the thinned image.42 Figure 9 shows several graphical symbols and their thinned versions after applying the K3M algorithm. The algorithm does not produce properly thinned output for small or already nearly thinned images.
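The per-phase deletion rule can be sketched as follows. This is a deliberately simplified illustration assuming a binary NumPy array: each phase deletes boundary pixels whose foreground-neighbor count falls in a given set, whereas the real K3M additionally consults weighted lookup tables to guarantee that connectivity is preserved.

```python
import numpy as np

def neighbors8(img, r, c):
    """The 8-neighborhood of (r, c), clockwise from the top-left."""
    return [img[r-1, c-1], img[r-1, c], img[r-1, c+1],
            img[r,   c+1], img[r+1, c+1], img[r+1, c],
            img[r+1, c-1], img[r,   c-1]]

def k3m_like_pass(img, allowed_counts):
    """One simplified K3M-style phase: delete boundary pixels (those
    with at least one background neighbor) whose number of foreground
    neighbors lies in `allowed_counts`."""
    out = img.copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if img[r, c] == 1:
                n = neighbors8(img, r, c)
                if 0 in n and sum(n) in allowed_counts:
                    out[r, c] = 0
    return out
```

Running the phases with the neighbor-count sets {3}, {3, 4}, {3, 4, 5}, and so on, mirrors the progression listed above; each pass peels pixels off the boundary of the stroke.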

Fig. 9

Thinning graphical symbols with K3M: (a) original shapes and (b) corresponding thinned shapes.41

JEI_32_1_010901_f009.png

4.3.

Hilditch Thinning Algorithm

The Hilditch thinning algorithm has two variants; one uses a 3×3 window, while the other uses a 4×4 window.38 The experiments in this paper employed the 3×3 window. The neighborhood pixel nomenclature is shown in Fig. 10. The algorithm iteratively decides whether the reference pixel P1 should be removed based on the following conditions:45

  • 1. Consider P1 for removal only if it is a foreground (ridge) pixel.

  • 2. Preserve P1 if it is an interior pixel, i.e., all of its neighbors belong to the foreground.

  • 3. Preserve P1 if it is an isolated pixel or an endpoint, i.e., it has fewer than two foreground neighbors.

  • 4. Preserve P1 if it is a connecting pixel, i.e., its removal would split the skeleton.

  • 5. Remove P1 otherwise, i.e., when it is a boundary pixel whose removal preserves connectivity.

Fig. 10

Neighborhood pixel nomenclature in Hilditch approach.

JEI_32_1_010901_f010.png

The algorithm evaluates all the above conditions to decide whether P1 should be preserved or eliminated. It stops when the most recent iteration encounters no pixels for removal. Figure 11 depicts a sample fingerprint image and its thinned version. The top-right corner of the thinned image shows an improper thinning operation, as a single ridge is split into several tiny ridges.
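Two of the core conditions — that P1 is a boundary pixel rather than an interior, isolated, or end pixel, and that removing it cannot split the pattern — can be sketched as follows, assuming a binary NumPy array and the P2..P9 neighbor labeling of Fig. 10. This is a partial check for illustration; the full Hilditch algorithm applies further neighbor-specific tests omitted here.

```python
import numpy as np

def connectivity_number(n):
    """Number of 0 -> 1 transitions around the circular neighbor list;
    a value of 1 means removing the center cannot split the pattern."""
    return sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))

def hilditch_removable(img, r, c):
    """Simplified Hilditch removal test for P1 = (r, c): P1 must be a
    foreground boundary pixel with 2 to 6 foreground neighbors, and its
    connectivity number must be 1."""
    if img[r, c] != 1:
        return False
    # P2..P9 clockwise, starting from the pixel directly above P1
    n = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
    b = sum(n)  # count of foreground neighbors
    return 2 <= b <= 6 and connectivity_number(b and n) == 1 if False else (2 <= b <= 6 and connectivity_number(n) == 1)
```

A pixel in the middle of a one-pixel-wide line fails the test (its removal would break the line), while a boundary pixel of a thicker stroke passes.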

Fig. 11

Fingerprint thinning: (a) original image and (b) corresponding thinned image.46

JEI_32_1_010901_f011.png

4.4.

Stentiford Thinning Algorithm

The templates used in the Stentiford algorithm to decide whether a pixel should be removed are shown in Fig. 12. It considers only three locations, marked with circles, in the neighborhood of a pixel. The algorithm proceeds as follows:40

  • 1. Traverse the image left-to-right, moving downwards, to locate pixels matching template T1.

  • 2. If the central pixel at such a location is not an endpoint, i.e., the last pixel of a line, and has a connectivity number47 of 1, mark it for removal.

  • 3. Repeat steps 1 and 2 for each pixel in the image.

  • 4. Repeat steps 1 to 3 for T2 traversing upwards left-to-right, for T3 traversing right-to-left upwards, and for T4 traversing downwards right-to-left.

  • 5. Remove all marked pixels.

  • 6. If any pixel was deleted in step 5 of the current iteration, repeat steps 1 to 5; otherwise, stop.

Fig. 12

The templates for deciding which pixels to remove in the Stentiford thinning algorithm. Only three locations, marked with circles, are considered in the decision.

JEI_32_1_010901_f012.png

Figure 13 shows a grayscale MRI image and the corresponding thinned image obtained using the Stentiford thinning algorithm. The thinned image accurately reflects the features of the original image.48

Fig. 13

Thinning of MRI image using Stentiford thinning algorithm: (a) original MRI image and (b) corresponding thinned image.48

JEI_32_1_010901_f013.png

5.

Experimental Setup

Recently, Joshi et al.7 proposed a minutiae geometry-based partial fingerprint identification approach targeted toward MasterPrint mitigation. The method, evaluated on partial fingerprint datasets cropped from five benchmark full fingerprint datasets, delivered up to 97% accuracy and generated 0.1% MasterPrints. The details of the datasets employed during the investigations are shown in Table 1. The authors employed an adaptive thresholding approach8 for thresholding and the Zhang–Suen algorithm9 for thinning the binarized partial fingerprints. Minutiae detection was carried out using the CN metric.49 However, an inappropriately binarized or thinned fingerprint may generate false minutiae, affecting system performance. Hence, the Kim et al.50 algorithm was employed to detect and remove false minutiae. The algorithm post-processes the detected minutiae using parameters such as ridge flow, ridge orientation, connectivity, and the distance between minutiae. It detects and eliminates five types of false minutiae, namely, broken ridge, bridge, short ridge, hole, and triangle.50

The investigation in this paper aims to study the impact of various combinations of thresholding and thinning methods on the identification accuracy and the percentage of MasterPrints generated by the Joshi et al.7 method. The original papers introducing the four thresholding and four thinning methods employed during the investigation have shown satisfactory results. However, the impact of their cross combinations has not yet been reported. This work does not attempt to pass judgment on a particular method or to ascertain that a specific pair is preferable. Instead, it evaluates the robustness of the Joshi et al.7 method under diverse pre-processing conditions. The experiments were carried out using the Joshi et al.7 method on the partial datasets used in their paper. However, instead of the adaptive thresholding approach8 and the Zhang–Suen thinning algorithm,9 16 combinations of the selected thresholding and thinning methods were applied. Thus, the investigation comprises 80 individual experiments (16 method combinations across five datasets).

The experiments were conducted on a desktop system with a 64-bit Ubuntu 20.04.2 LTS (Focal Fossa) operating system, 64 GB internal memory (RAM), and an Intel® Xeon® CPU E5-1620 v3 @ 3.50 GHz × 8 processor. The terminology used for the various combinations of thresholding and thinning approaches on different datasets is given in Table 2. We followed the D_B_T format for each combination, where D refers to the dataset, B specifies the thresholding approach, and T denotes the thinning approach. For example, entry 1_1_1 refers to the combination of iterative optimal thresholding and the KMM thinning algorithm experimented on the CrossMatch Sample DB dataset. Figure 14 shows a sample image from the CrossMatch Sample DB dataset and its thinned version from each combination of thresholding and thinning approaches listed in Table 2. The thinned images for the same fingerprint generated from different combinations appear significantly diverse. This variation is expected to produce considerable differences in the identification performance of the Joshi et al.7 method.
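Under the D_B_T scheme, the full set of experiment labels in Table 2 can be enumerated programmatically; the short sketch below (illustrative only, with method names abbreviated) reproduces the 80-experiment count:

```python
from itertools import product

datasets = ["CrossMatch Sample DB", "FVC2002 DB1_A", "FVC2002 DB2_A",
            "NIST sd302b", "NIST sd302d"]
thresholding = ["iterative optimal", "Otsu", "Niblack", "Bernsen"]
thinning = ["KMM", "K3M", "Hilditch", "Stentiford"]

# D_B_T label: 1-based dataset, thresholding, and thinning indices.
labels = {f"{d + 1}_{b + 1}_{t + 1}": (datasets[d], thresholding[b], thinning[t])
          for d, b, t in product(range(5), range(4), range(4))}

assert len(labels) == 80   # 16 combinations on each of 5 datasets
```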

Table 1

Summary of the cropped partial datasets (150 × 150 px) used in the experiments.

Dataset | FVC2002 DB1_A | FVC2002 DB2_A | CrossMatch | sd302b | sd302d
Sensor technology | Optical sensor | Optical sensor | Optical sensor | Touch-free | Touch-free
Full fingerprints | 800 | 800 | 408 | 920 | 1600
Total subjects | 100 | 100 | 51 | 92 | 200
Samples per subject | 8 | 8 | 8 | 10 | 8
Image resolution | 500 dpi | 569 dpi | 500 ppi | 1000 ppi | 500 ppi
Partial dataset size | 3549 | 2767 | 2134 | 2098 | 2960

Table 2

Nomenclature for the various combinations of thresholding and thinning approaches on different datasets (TN1 – KMM thinning algorithm, TN2 – K3M thinning algorithm, TN3 – Hilditch thinning algorithm, TN4 – Stentiford thinning algorithm).

Thresholding approach | Thinning approach | CrossMatch Sample DB | FVC2002 DB1_A | FVC2002 DB2_A | NIST sd302b | NIST sd302d
Iterative optimal thresholding | TN1 | 1_1_1 | 2_1_1 | 3_1_1 | 4_1_1 | 5_1_1
Iterative optimal thresholding | TN2 | 1_1_2 | 2_1_2 | 3_1_2 | 4_1_2 | 5_1_2
Iterative optimal thresholding | TN3 | 1_1_3 | 2_1_3 | 3_1_3 | 4_1_3 | 5_1_3
Iterative optimal thresholding | TN4 | 1_1_4 | 2_1_4 | 3_1_4 | 4_1_4 | 5_1_4
Otsu’s method | TN1 | 1_2_1 | 2_2_1 | 3_2_1 | 4_2_1 | 5_2_1
Otsu’s method | TN2 | 1_2_2 | 2_2_2 | 3_2_2 | 4_2_2 | 5_2_2
Otsu’s method | TN3 | 1_2_3 | 2_2_3 | 3_2_3 | 4_2_3 | 5_2_3
Otsu’s method | TN4 | 1_2_4 | 2_2_4 | 3_2_4 | 4_2_4 | 5_2_4
Niblack local thresholding | TN1 | 1_3_1 | 2_3_1 | 3_3_1 | 4_3_1 | 5_3_1
Niblack local thresholding | TN2 | 1_3_2 | 2_3_2 | 3_3_2 | 4_3_2 | 5_3_2
Niblack local thresholding | TN3 | 1_3_3 | 2_3_3 | 3_3_3 | 4_3_3 | 5_3_3
Niblack local thresholding | TN4 | 1_3_4 | 2_3_4 | 3_3_4 | 4_3_4 | 5_3_4
Bernsen’s local image thresholding | TN1 | 1_4_1 | 2_4_1 | 3_4_1 | 4_4_1 | 5_4_1
Bernsen’s local image thresholding | TN2 | 1_4_2 | 2_4_2 | 3_4_2 | 4_4_2 | 5_4_2
Bernsen’s local image thresholding | TN3 | 1_4_3 | 2_4_3 | 3_4_3 | 4_4_3 | 5_4_3
Bernsen’s local image thresholding | TN4 | 1_4_4 | 2_4_4 | 3_4_4 | 4_4_4 | 5_4_4

Fig. 14

A sample fingerprint image (a) from the CrossMatch Sample DB dataset and corresponding thinned images, (a1–a16), generated from various combinations of thresholding and thinning approaches.


6.

Evaluation Metrics and Result Analysis

The MasterPrint vulnerability is a threat to an identification system. Hence, the investigation in this paper followed both closed-set and open-set identification set-ups. The investigation involved two tests: an identification test51 and a zero MasterPrint detection test. During the identification test, each template was compared with every other template from the dataset, and the similarity score for each comparison was computed. If the highest score corresponds to a sample of the actual subject, the outcome is counted as a correct detect and identify (CDI). In a false alarm (FA) scenario, the highest score belongs to another subject's template. The system may also reject a partial fingerprint owing to no similarity with any stored template. Suppose the system is enrolled with P partial fingerprints, and C, F, and R denote the counts of CDI, FA, and rejected partial fingerprints, respectively. The detect and identification rate (DIR), σ, the FA rate (FAR), Ϝ, and the rejection rate (RR), ϒ, are computed as σ = (C/P) × 100, Ϝ = (F/P) × 100, and ϒ = (R/P) × 100.
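A minimal sketch of these three rates, assuming each probe resolves to exactly one of CDI, FA, or rejection (the counts in the usage line are hypothetical, not taken from the result tables):

```python
def identification_rates(C, F, R, P):
    """Open-set identification rates from counts: C correct detect-and-identify
    outcomes, F false alarms, and R rejections, out of P enrolled partial
    fingerprints."""
    assert C + F + R == P, "every probe is a CDI, a false alarm, or a rejection"
    dir_ = 100.0 * C / P   # detect and identification rate (DIR)
    far = 100.0 * F / P    # false alarm rate (FAR)
    rr = 100.0 * R / P     # rejection rate (RR)
    return dir_, far, rr

# Hypothetical counts for illustration: 848 CDI, 66 FA, 86 rejections of 1000.
print(identification_rates(848, 66, 86, 1000))   # DIR 84.8, FAR 6.6, RR 8.6
```

Note that the three rates always sum to 100%, which is a useful sanity check when reading Tables 3–7.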

An identification system producing the fewest MasterPrints at a higher DIR and lower FAR is ideal for practical use in a fingerprint-based biometric system. The identification test results on each dataset involve computing the DIR, FAR, RR, and the percentage of MasterPrints generated without setting a predefined threshold. A cumulative matching characteristic (CMC) curve shows the rank-k performance of an identification system, depicting the identification of the correct subject at different ranks.52 The results from the identification test provide the data for the CMC curve. Suppose k subjects are enrolled with a system; ideally, the rank-k identification rate should be 100%, and the best approach is expected to reach 100% at the earliest rank. Hence, the CMC plots presented here show the DIR performance up to rank-10.
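The rank-k DIR values behind a CMC curve can be sketched as follows, assuming a probe-by-gallery similarity score matrix and that every probe's subject is present in the gallery (an assumption of this sketch, not a claim about the authors' implementation):

```python
import numpy as np

def cmc_curve(scores, labels, probe_labels, max_rank=10):
    """Rank-k DIR values (in %) for a CMC curve. scores[i, j] is the
    similarity of probe i to gallery template j; labels[j] and
    probe_labels[i] are subject IDs. Assumes each probe's subject has at
    least one gallery template."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores, axis=1)            # gallery sorted best-first
    ranked = np.asarray(labels)[order]             # subject IDs in rank order
    hits = ranked == np.asarray(probe_labels)[:, None]
    first_hit = hits.argmax(axis=1)                # 0-based rank of first match
    return [100.0 * np.mean(first_hit < k) for k in range(1, max_rank + 1)]
```

Plotting the returned list against rank 1..10 reproduces the style of CMC plot used in Figs. 15–19.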

For each combination that produced MasterPrints in the identification test, a zero MasterPrint detection test was conducted. In this test, the system threshold was raised gradually until no MasterPrints were observed. Suppose τ is the threshold at which no MasterPrints are observed; the DIR at τ, denoted σ0, is calculated using the formula for σ. A good approach should show only marginal variation between σ and σ0. Subsequently, the threshold range up to τ is divided to obtain three intermediate thresholds, and the DIR and FAR are computed at each of them. The DIR and FAR at the initial threshold, at τ, and at the three intermediate thresholds provide the data to plot the watchlist receiver operating characteristic (ROC) curve for each dataset. A curve occupying the top-left region of the watchlist ROC plot is considered robust, as it shows a slight variation in DIR and a significant reduction in FAR as the system threshold is increased to accept only highly similar partial fingerprints.
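The threshold-raising procedure can be sketched as below, under a deliberately simplified MasterPrint notion (any cross-subject score above the threshold counts as a MasterPrint match); the helper also returns the five operating points (initial threshold, three intermediate thresholds, and τ) at which DIR and FAR would be computed for the watchlist ROC:

```python
import numpy as np

def zero_masterprint_threshold(scores, same_subject, step=0.01):
    """Gradually raise the accept threshold until no impostor (cross-subject)
    score exceeds it; returns the smallest such threshold tau. This is a
    simplified stand-in for the 'no MasterPrints observed' condition."""
    impostor = np.asarray(scores)[~np.asarray(same_subject)]
    tau = 0.0
    while (impostor > tau).any():
        tau = round(tau + step, 10)   # round to avoid float drift
    return tau

def watchlist_operating_points(t_start, tau, n_mid=3):
    """Thresholds at which DIR and FAR are evaluated for the watchlist ROC:
    the starting threshold, n_mid intermediate thresholds, and tau."""
    return np.linspace(t_start, tau, n_mid + 2)
```

In the actual tests, the MasterPrint condition is evaluated per template against the enrolled population rather than per score, but the threshold sweep follows the same pattern.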

6.1.

Identification and Zero MasterPrint Detection Test Results

The partial fingerprint identification method with different pre-processing combinations was evaluated for σ, Ϝ, σ0, ϒ, and the percentage of MasterPrints generated. The results of each combination of thresholding and thinning approaches for the identification and zero MasterPrint detection tests on the CrossMatch Sample DB, FVC2002 DB1_A, FVC2002 DB2_A, NIST sd302b, and NIST sd302d datasets are presented in Tables 3, 4, 5, 6, and 7, respectively. On CrossMatch Sample DB, the highest DIR was 92.65%, achieved by 1_1_3 while producing nearly 10% MasterPrints; the combination 1_2_3 generated more than 21% MasterPrints. On FVC2002 DB1_A, only two combinations, namely, 2_4_2 and 2_4_3, achieved more than 90% DIR; however, 2_4_2 generated the maximum percentage of MasterPrints among the combinations on this dataset. On FVC2002 DB2_A, the combination 3_4_1 delivered more than 93% DIR but produced above 16% MasterPrints, whereas the lowest percentage of MasterPrints, around 8%, was produced by 3_1_4. On NIST sd302b, three combinations, namely, 4_1_1, 4_2_1, and 4_4_2, achieved above 90% DIR; however, 4_4_2 generated above 18% MasterPrints and was therefore ineffective toward mitigating the MasterPrint vulnerability. On NIST sd302d, the highest DIR was 92.38%, achieved by 5_1_2 while producing 16.81% MasterPrints, and the combination 5_4_4 generated more than 23% MasterPrints.

Table 3

Results on the CrossMatch Sample DB dataset. σ0 denotes the DIR in the zero MasterPrint detection test, and MP represents the percentage of MasterPrints generated in the identification test.

Binding approach | σ (%) | Ϝ (%) | σ0 (%) | MP (%) | ϒ (%)
1_1_1 | 84.8 | 6.6 | 62.2 | 14.8 | 8.6
1_1_2 | 77.9 | 7.7 | 61.8 | 7.94 | 14.4
1_1_3 | 92.65 | 3.8 | 75.65 | 9.65 | 3.55
1_1_4 | 90.12 | 3.86 | 78.4 | 9.12 | 6.02
1_2_1 | 80.39 | 8.26 | 62.32 | 8.39 | 11.35
1_2_2 | 81.15 | 5.34 | 60.84 | 15.15 | 13.51
1_2_3 | 91.52 | 5.7 | 73.29 | 21.12 | 2.78
1_2_4 | 91.89 | 2.45 | 75.62 | 12.89 | 5.66
1_3_1 | 79.95 | 7.95 | 60.4 | 9.95 | 12.1
1_3_2 | 70.29 | 9.41 | 55.2 | 16.29 | 20.3
1_3_3 | 84.31 | 6.34 | 62.87 | 8.31 | 9.35
1_3_4 | 80.88 | 4.46 | 65.48 | 8.89 | 14.66
1_4_1 | 87.25 | 4.55 | 60.98 | 17.25 | 8.2
1_4_2 | 68.8 | 7.12 | 55.43 | 18.87 | 24.08
1_4_3 | 89.69 | 2.61 | 72.64 | 9.69 | 7.7
1_4_4 | 92.16 | 3.06 | 74.7 | 9.16 | 4.78

Table 4

Results on the FVC2002 DB1_A dataset. σ0 denotes the DIR in the zero MasterPrint detection test, and MP represents the percentage of MasterPrints generated in the identification test.

Binding approach | σ (%) | Ϝ (%) | σ0 (%) | MP (%) | ϒ (%)
2_1_1 | 85.2 | 4.18 | 68.7 | 8.2 | 10.62
2_1_2 | 82.06 | 6.84 | 69.57 | 12.06 | 11.1
2_1_3 | 79.35 | 2.99 | 60.58 | 7.35 | 17.66
2_1_4 | 85.88 | 5.48 | 62.74 | 15.88 | 8.64
2_2_1 | 69.61 | 2.29 | 55.49 | 17.61 | 28.1
2_2_2 | 79.85 | 5.27 | 62.74 | 19.85 | 14.88
2_2_3 | 75.88 | 3.48 | 60.8 | 15.88 | 20.64
2_2_4 | 77.11 | 8.87 | 65.71 | 17.11 | 14.02
2_3_1 | 68.05 | 3.27 | 55.96 | 6.05 | 28.68
2_3_2 | 89.71 | 4.54 | 75.21 | 9.71 | 5.75
2_3_3 | 78.69 | 8.93 | 65.84 | 5.69 | 12.38
2_3_4 | 89.12 | 7.13 | 62.09 | 17.12 | 3.75
2_4_1 | 82.75 | 6.85 | 70.75 | 12.75 | 10.4
2_4_2 | 91.13 | 6.14 | 73.25 | 21.13 | 2.73
2_4_3 | 90.31 | 4.54 | 78.09 | 12.37 | 5.15
2_4_4 | 79.84 | 4.16 | 64.86 | 7.84 | 16

Table 5

Results on the FVC2002 DB2_A dataset. σ0 denotes the DIR in the zero MasterPrint detection test, and MP represents the percentage of MasterPrints generated in the identification test.

Binding approach | σ (%) | Ϝ (%) | σ0 (%) | MP (%) | ϒ (%)
3_1_1 | 81.26 | 4.06 | 75.21 | 9.26 | 14.68
3_1_2 | 81.8 | 7.41 | 78.95 | 10.31 | 10.79
3_1_3 | 85.29 | 7.21 | 74 | 8.29 | 7.5
3_1_4 | 85.05 | 7.28 | 62.54 | 8.05 | 7.67
3_2_1 | 88.53 | 2.73 | 60.27 | 11.53 | 8.74
3_2_2 | 67.33 | 5.22 | 51.06 | 12.5 | 27.45
3_2_3 | 85.05 | 7.29 | 71.98 | 15.05 | 7.66
3_2_4 | 84.31 | 6.53 | 62.7 | 14.31 | 9.16
3_3_1 | 92.16 | 3.2 | 80.29 | 12.16 | 4.64
3_3_2 | 91.79 | 5.54 | 76.32 | 17.79 | 2.67
3_3_3 | 70.51 | 7.18 | 62.98 | 19.51 | 22.31
3_3_4 | 79.26 | 8.95 | 59.21 | 9.26 | 11.79
3_4_1 | 93.32 | 3.94 | 80.9 | 16.32 | 2.75
3_4_2 | 79.51 | 7.19 | 60.77 | 19.51 | 13.3
3_4_3 | 91.67 | 4.81 | 74 | 11.67 | 3.52
3_4_4 | 89.3 | 6.03 | 65.74 | 9.3 | 4.67

Table 6

Results on the NIST sd302b dataset. σ0 denotes the DIR in the zero MasterPrint detection test, and MP represents the percentage of MasterPrints generated in the identification test.

Binding approach | σ (%) | Ϝ (%) | σ0 (%) | MP (%) | ϒ (%)
4_1_1 | 92.85 | 3.37 | 74.65 | 4.85 | 3.78
4_1_2 | 85.75 | 3.65 | 60.9 | 9.75 | 10.6
4_1_3 | 65.2 | 4.07 | 52.49 | 15.2 | 30.73
4_1_4 | 62.01 | 2.94 | 49.65 | 6.01 | 35.05
4_2_1 | 91.1 | 5.11 | 79.58 | 9.1 | 3.79
4_2_2 | 81.75 | 6.61 | 67.25 | 13.75 | 11.64
4_2_3 | 62.01 | 1.83 | 50.2 | 12.01 | 36.16
4_2_4 | 62.25 | 3.08 | 52.36 | 12.25 | 34.67
4_3_1 | 74.57 | 7.41 | 56.8 | 11.57 | 18.02
4_3_2 | 74.02 | 8.53 | 65.21 | 22.02 | 17.45
4_3_3 | 84.8 | 6.88 | 72.95 | 8.8 | 8.32
4_3_4 | 88.97 | 6.33 | 76.32 | 18.97 | 4.7
4_4_1 | 81.52 | 8.97 | 74.68 | 15.53 | 9.51
4_4_2 | 90.28 | 6.29 | 78.98 | 18.28 | 3.43
4_4_3 | 78.19 | 6.83 | 62.4 | 17.19 | 14.98
4_4_4 | 83.33 | 6.26 | 60.35 | 13.39 | 10.41

Table 7

Results on the NIST sd302d dataset. σ0 denotes the DIR in the zero MasterPrint detection test, and MP represents the percentage of MasterPrints generated in the identification test.

Binding approach | σ (%) | Ϝ (%) | σ0 (%) | MP (%) | ϒ (%)
5_1_1 | 82.17 | 6.07 | 70.25 | 12.11 | 11.76
5_1_2 | 92.38 | 6.19 | 79.28 | 16.81 | 1.43
5_1_3 | 73.82 | 2.12 | 62.47 | 3.82 | 24.06
5_1_4 | 67.91 | 2.31 | 51.64 | 7.96 | 29.78
5_2_1 | 82.35 | 7.3 | 69.4 | 8.35 | 10.35
5_2_2 | 83.06 | 8.21 | 67.24 | 9.06 | 8.73
5_2_3 | 67.01 | 2.32 | 50.85 | 7.01 | 30.67
5_2_4 | 88.58 | 6.54 | 69.4 | 8.97 | 4.88
5_3_1 | 85.78 | 7.2 | 60.36 | 15.78 | 7.02
5_3_2 | 87.09 | 5.45 | 65.32 | 17.05 | 7.46
5_3_3 | 83.63 | 7.02 | 59.21 | 13.63 | 9.35
5_3_4 | 71.96 | 3.55 | 57.39 | 11.96 | 24.49
5_4_1 | 72.55 | 8.37 | 60.9 | 17.55 | 19.08
5_4_2 | 91.1 | 5.22 | 78.35 | 15.12 | 3.68
5_4_3 | 84.17 | 5.67 | 68.21 | 14.85 | 10.16
5_4_4 | 73.43 | 8.09 | 64.85 | 23.45 | 18.48

The entries from Tables 3–7 demonstrate that the DIR ranges between 62% and 93%, whereas the average DIR of the Joshi et al.7 approach was 93.8%. Thus, the DIR of the Joshi et al.7 method was reduced by more than 11% on average by varying the combination of thresholding and thinning methods. Moreover, the DIR in the zero MasterPrint detection test was also lowered by nearly 10% during the investigation. In contrast, the average FAR decreased by only 0.6% compared with the Joshi et al.7 work, and the average RR reduced by 7.16%. The percentage of MasterPrints generated during the investigation ranges between 3.82% and 23.45%, whereas for the Joshi et al.7 method the range lies within 0.1%–2.03%. On average, the percentage of MasterPrints generated during the experiments was more than 11%, a roughly 14-fold increase relative to the original paper. These statistics demonstrate that the accuracy and MasterPrint mitigation performance of the Joshi et al.7 method degraded notably under different pre-processing schemes. Overall, σ, Ϝ, σ0, ϒ, and the percentage of MasterPrints generated during the investigations range between 62.01% and 93.32%, 1.83% and 9.41%, 50.2% and 80.9%, 1.43% and 36.16%, and 3.82% and 23.45%, respectively. The results from Tables 3–7 for the identification and zero MasterPrint detection tests thus appear diversely distributed for each of the five parameters under consideration, and no binding approach showed consistently superior behavior across the datasets. Hence, these performance measures do not form a concrete basis for designating any pair, or any specific thresholding or thinning method, as preferable over the others for fingerprint recognition.

6.2.

CMC and Watchlist ROC Curve Performance

The CMC and watchlist ROC curves for each pre-processing combination on an individual dataset are depicted in the left and right portions of Figs. 15–19, respectively. The highest and average rank-10 DIR achieved during the investigation were 98.6% and 87.35%, respectively. The plots also demonstrate that the DIR did not improve beyond rank-3, whereas the Joshi et al.7 method achieved 100% DIR on each dataset by rank-2. In the watchlist ROC plots, the curves are ideally expected to deviate marginally in DIR while showing significant variations in FAR. However, the plots in Figs. 15–19 demonstrate that the identification rate reduced considerably compared with the FAR. Moreover, the average DIR in the zero MasterPrint detection test, σ0, reduced from 76.14% in the Joshi et al.7 work to 65.96% during the investigations, a degradation of more than 10%. The plots thus show that the average identification accuracy was reduced by more than 12% when diverse combinations of pre-processing methods were employed in the Joshi et al.7 work. However, it is still infeasible to label any single thresholding or thinning method, or any specific combination thereof, as more suitable for fingerprint recognition, owing to their inconsistent performance in the CMC and watchlist ROC plots.

Fig. 15

CMC plot (a) and Watchlist ROC plot (b) for CrossMatch Sample DB dataset.


Fig. 16

CMC plot (a) and Watchlist ROC plot (b) for FVC2002 DB1_A dataset.


Fig. 17

CMC plot (a) and Watchlist ROC plot (b) for FVC2002 DB2_A dataset.


Fig. 18

CMC plot (a) and Watchlist ROC plot (b) for NIST sd302b dataset.


Fig. 19

CMC plot (a) and Watchlist ROC plot (b) for NIST sd302d dataset.


7.

Discussion

The investigation illustrated that the partial fingerprint identification and MasterPrint mitigation method presented by Joshi et al.7 delivered low performance on crucial parameters when various thresholding and thinning methods were employed in place of the adaptive thresholding approach8 and Zhang–Suen thinning algorithm9 used in the original work. The combination of Bernsen's local image thresholding and the K3M thinning algorithm produced above 90% DIR on three datasets, namely, FVC2002 DB1_A, NIST sd302b, and NIST sd302d, but generated more than 15% MasterPrints in the same experiments. Another combination, Bernsen's local image thresholding with the Hilditch thinning algorithm, delivered greater than 90% DIR on the FVC datasets while generating more than 11% MasterPrints. Thus, the DIR and the percentage of MasterPrints generated by the various pre-processing combinations on five benchmark datasets showed that no pairing consistently performed better than the others. Moreover, the entries in Tables 3–7 also confirmed that it is infeasible to recommend a specific thresholding or thinning method as appropriate for partial fingerprint identification on the given datasets. Precise thresholding and thinning of fingerprint ridges are the driving factors for accurate minutiae detection. The image processing literature includes several approaches for thresholding and thinning grayscale images, but not all of them are compatible with high-security applications such as anonymous user identification through a fingerprint. Conclusively, the results confirmed that high-security applications and user identification systems employing the biometric traits of an individual are greatly influenced by the choices made in the pre-processing stage.

Jabeen and Khan53 proposed a hybrid algorithm for false minutiae and boundary elimination. The algorithm removes false minutiae arising from bridges, spikes, and ridge breaks in a thinned fingerprint image. Xiao and Raafat54 presented a false minutiae detection and elimination method in which false minutiae are represented using structural and statistical approaches. The Kim et al.50 method for false minutiae removal employed in our experimentation, like other similar approaches, operates on thinned images for false minutiae detection and elimination. Hence, an inappropriately binarized or thinned fingerprint image undermines the proper functioning of any false minutiae removal scheme.

Although there is no lower bound on the count of matched minutiae for deciding whether two fingerprints match, some legal procedures accept a match of at least 8 to 17 minutiae as evidence.55 The average minutiae density of a 500-dpi fingerprint is estimated to be 0.246 minutiae/mm2.56 Such statistics should be employed to confirm that accurate minutiae detection and matching is carried out by any future method involving thresholding and thinning in the pre-processing phase. A robust fingerprint identification method should also be evaluated on fingerprint datasets acquired with diverse sensor types; a fingerprint identification system can generate MasterPrints if its pre-processing approaches are not rigorously tested on several datasets from different sensors.
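As a worked example of the density figure, the expected minutiae count for a 150 × 150 px partial at 500 dpi follows directly from unit conversion; the function below is illustrative only:

```python
def expected_minutiae(px_w, px_h, dpi, density=0.246):
    """Expected minutiae count for a partial fingerprint, using the average
    density of 0.246 minutiae/mm^2 reported for 500-dpi fingerprints.56"""
    mm_per_inch = 25.4
    w_mm = px_w / dpi * mm_per_inch
    h_mm = px_h / dpi * mm_per_inch
    return density * w_mm * h_mm

# A 150 x 150 px crop at 500 dpi covers 7.62 mm x 7.62 mm (about 58.1 mm^2),
# so roughly 14 minutiae are expected on average.
print(expected_minutiae(150, 150, 500))
```

A detector that consistently reports far more minutiae than this estimate on such crops is a strong hint of spurious minutiae introduced by the thresholding or thinning stage.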

The first method addressing the MasterPrint vulnerability for partial fingerprint identification was published by Joshi et al.7 A thorough search of the relevant literature on thresholding and thinning of fingerprint images yielded no prior work that experimented with diverse combinations in the pre-processing stage to study their impact on the identification accuracy and the percentage of MasterPrints generated by a partial fingerprint identification method. The work in this paper is thus a maiden attempt to demonstrate that not every image pre-processing approach is suitable for high-security person identification using partial fingerprints. It thereby motivates the need for image pre-processing methods that are extensively tested on fingerprint datasets from diverse sensors.

As the future scope of this work, we will study the feasibility of a robust partial fingerprint identification and MasterPrint mitigation method for poor-quality latent fingerprints and utilize multiple pre-processing schemes to prove its practicability. As the experimental results showed that pre-processing methods strongly affect an identification system's accuracy, the initial goal will be to experiment with different pre-processing and false minutiae removal approaches on poor-quality partial and latent fingerprint datasets acquired using dissimilar sensor types.

8.

Conclusion

The MasterPrint vulnerability makes a partial fingerprint identification system susceptible to presentation attacks. This paper presented an investigation using 16 combinations of thresholding and thinning methods in a partial fingerprint identification system to study their impact on identification accuracy and the percentage of MasterPrints generated. The results demonstrated that the existing partial fingerprint identification and MasterPrint mitigation approach performed unsatisfactorily when diverse combinations of thresholding and thinning methods were employed during the pre-processing stage. The investigation showed that not every existing thresholding and thinning method is compatible with minutiae-based fingerprint identification systems. The results also indicated that average minutiae density is a preferable measure to the average minutiae count for feature extraction from a fingerprint image. Furthermore, the investigations demonstrated that the system performance can barely be improved by false minutiae removal methods if the underlying thresholding and thinning approaches do not produce thinned images that retain the ridge patterns and ridge connectivity of the original grayscale fingerprint images. Thus, the work in this paper underscores the need for rigorously tested pre-processing methods suitable for fingerprint images.

References

1. 

A. Ross, K. Nandakumar and A. K. Jain, Handbook of Multibiometrics, Vol. 6, Springer (2006). Google Scholar

2. 

A. Farina, Z. M. Kovács-Vajna and A. Leone, “Fingerprint minutiae extraction from skeletonized binary images,” Pattern Recognit., 32 (5), 877 –889 https://doi.org/10.1016/S0031-3203(98)00107-1 (1999). Google Scholar

3. 

N. K. Ratha, S. Chen and A. K. Jain, “Adaptive flow orientation-based feature extraction in fingerprint images,” Pattern Recognit., 28 (11), 1657 –1672 https://doi.org/10.1016/0031-3203(95)00039-3 (1995). Google Scholar

4. 

T. Sakai, M. Nagao and H. Matsushima, “Extraction of invariant picture sub-structures by computer,” Comput. Graph. Image Process., 1 (1), 81 –96 https://doi.org/10.1016/S0146-664X(72)80008-X (1972). Google Scholar

5. 

J. Feng, “Combining minutiae descriptors for fingerprint matching,” Pattern Recognit., 41 (1), 342 –352 https://doi.org/10.1016/j.patcog.2007.04.016 (2008). Google Scholar

6. 

Z. Guo and R. W. Hall, “Parallel thinning with two-subiteration algorithms,” Commun. ACM, 32 (3), 359 –373 https://doi.org/10.1145/62065.62074 CACMA2 0001-0782 (1989). Google Scholar

7. 

M. Joshi, B. Mazumdar and S. Dey, “Mitigating MasterPrint vulnerability by employing minutiae geometry,” J. Electron. Imaging, 31 (1), 013026 https://doi.org/10.1117/1.JEI.31.1.013026 JEIME5 1017-9909 (2022). Google Scholar

8. 

D. Bradley and G. Roth, “Adaptive thresholding using the integral image,” J. Graph. Tools, 12 13 –21 https://doi.org/10.1080/2151237X.2007.10129236 (2007). Google Scholar

9. 

T. Y. Zhang and C. Y. Suen, “A fast parallel algorithm for thinning digital patterns,” Commun. ACM, 27 236 –239 https://doi.org/10.1145/357994.358023 CACMA2 0001-0782 (1984). Google Scholar

10. 

A. Roy, N. D. Memon and A. Ross, “Masterprint: exploring the vulnerability of partial fingerprint-based authentication systems,” IEEE Trans. Inf. Forensics Security, 12 (9), 2013 –2025 https://doi.org/10.1109/TIFS.2017.2691658 (2017). Google Scholar

11. 

P. Bontrager et al., “DeepMasterPrints: generating MasterPrints for dictionary attacks via latent variable evolution,” in IEEE 9th Int. Conf. Biometrics Theor., Appl. and Syst. (BTAS), (2018). https://doi.org/10.1109/BTAS.2018.8698539 Google Scholar

12. 

C. I. Watson, “NIST special database 9, mated fingerprint card pairs,” (1993). Google Scholar

13. 

X. Jia et al., “A cross-device matching fingerprint database from multi-type sensors,” in 21st Int. Conf. Pattern Recognit. (ICPR), 3001 –3004 (2012). Google Scholar

14. 

K. Arora and D. Garg, “A quantitative survey of various fingerprint enhancement techniques,” Int. J. Comput. Appl., 28 24 –29 https://doi.org/10.5120/3383-4691 (2011). Google Scholar

15. 

A. A. Abbood, G. Sulong and S. U. Peters, “A review of fingerprint image pre-processing,” Jurnal Teknologi, 69 (2), 79 –84 https://doi.org/10.11113/jt.v69.3111 (2014). Google Scholar

16. 

X. Wang and L. Chen, “An effective histogram modification scheme for image contrast enhancement,” Signal Process. Image Commun., 58 187 –198 https://doi.org/10.1016/j.image.2017.07.009 SPICEF 0923-5965 (2017). Google Scholar

17. 

Y. Qi et al., “A comprehensive overview of image enhancement techniques,” Arch. Comput. Methods Eng., 29 583 –607 https://doi.org/10.1007/s11831-021-09587-6 (2021). Google Scholar

18. 

N. Longkumer, M. Kumar and R. Saxena, “Contrast enhancement techniques using histogram equalization: a survey,” Int. J. Curr. Eng. Technol., 4 1561 –1565 https://doi.org/10.5121/ijcsea.2013.3402 (2014). Google Scholar

19. 

W. A. Mustafa and M. M. M. A. Kader, “A review of histogram equalization techniques in image enhancement application,” J. Phys. Conf. Ser., 1019 012026 https://doi.org/10.1088/1742-6596/1019/1/012026 JPCSDZ 1742-6588 (2018). Google Scholar

20. 

D. Maltoni et al., Handbook of Fingerprint Recognition, 2nd ed., Springer (2009). Google Scholar

21. 

A. Shetter, S. Prajwalasimha and H. Swapna, “Finger print image enhancement using thresholding and binarization techniques,” in Second Int. Conf. Inventive Commun. and Comput. Technol. (ICICCT), (2018). https://doi.org/10.1109/ICICCT.2018.8473286 Google Scholar

22. 

D. Maio and D. Maltoni, “Direct gray-scale minutiae detection in fingerprints,” IEEE Trans. Pattern Anal. Mach. Intell., 19 (1), 27 –40 https://doi.org/10.1109/34.566808 ITPIDJ 0162-8828 (1997). Google Scholar

23. 

S. A. Sudiro, M. Paindavoine and T. M. Kusuma, “Simple fingerprint minutiae extraction algorithm using crossing number on valley structure,” in IEEE Workshop on Autom. Identification Adv. Technol., (2007). https://doi.org/10.1109/AUTOID.2007.380590 Google Scholar

24. 

M. Sonka, V. Hlavác and R. Boyle, Image Processing, Analysis and Machine Vision, 3rd ed., Thomson (2008). Google Scholar

25. 

P. Stathis, E. Kavallieratou and N. Papamarkos, “An evaluation technique for binarization algorithms,” J. Univers. Comput. Sci., 14 (18), 3011 –3030 https://doi.org/10.3217/jucs-014-18-3011 (2008). Google Scholar

26. 

S. H. Shaikh, K. Saeed, N. Chaki, “Performance benchmarking of different binarization techniques for fingerprint-based biometric authentication,” in Proc. 8th Int. Conf. Comput. Recognit. Syst. CORES 2013, 237 –246 (2013). https://doi.org/10.1007/978-3-319-00969-8_23 Google Scholar

27. 

N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern., 9 (1), 62 –66 https://doi.org/10.1109/TSMC.1979.4310076 (1979). Google Scholar

28. 

W. Niblack, An Introduction to Digital Image Processing, Prentice Hall International (1988). Google Scholar

29. 

J. Bernsen, “Dynamic thresholding of grey-level images,” in ICPR’86: Proc. Int. Conf. Pattern Recognit., 1251 –1255 (1986). Google Scholar

30. 

K. Buch et al., “Quantitative variations in texture analysis features dependent on mri scanning parameters: a phantom model,” J. Appl. Clin. Med. Phys., 19 (6), 253 –264 https://doi.org/10.1002/acm2.12482 (2018). Google Scholar

31. 

N. B. Rais, M. S. Hanif and I. A. Taj, “Adaptive thresholding technique for document image analysis,” in Proc. INMIC 8th Int. Multitopic Conf., 61 –66 (2004). https://doi.org/10.1109/INMIC.2004.1492847 Google Scholar

32. 

K. Khurshid et al., “Comparison of niblack inspired binarization methods for ancient documents,” Proc. SPIE, 7247 72470U https://doi.org/10.1117/12.805827 PSISDG 0277-786X (2009). Google Scholar

33. 

L. P. Saxena, “Niblack’s binarization method and its modifications to real-time applications: a review,” Artif. Intell. Rev., 51 673 –705 https://doi.org/10.1007/s10462-017-9574-2 AIREV6 (2017). Google Scholar

34. 

N. Senthilkumaran and S. Vaithegi, “Image segmentation by using thresholding techniques for medical images,” Comput. Sci. Eng. Int. J., 6 1 –13 https://doi.org/10.5121/cseij.2016.6101 (2016). Google Scholar

35. 

C. Eyupoglu, “Implementation of bernsen’s locally adaptive binarization method for gray scale images,” Online J. Sci. Technol., 7 68 –72 (2017). Google Scholar

36. 

Q. Li, X. Bai and W. Liu, “Skeletonization of gray-scale image from incomplete boundaries,” in 15th IEEE Int. Conf. Image Process., (2008). https://doi.org/10.1109/ICIP.2008.4711895 Google Scholar

37. 

M. Nazarkevych et al., “Evaluation of the effectiveness of different image skeletonization methods in biometric security systems,” Int. J. Sens. Wireless Commun. Control, 11 542 –552 https://doi.org/10.2174/2210327910666201210151809 (2021). Google Scholar

38. 

C. J. Hilditch, “Linear skeletons from square cupboards,” Machine Intelligence 4, 403, Edinburgh University Press (1969). Google Scholar

39. 

P. K. Saha, G. Borgefors and G. S. di Baja, “A survey on skeletonization algorithms and their applications,” Pattern Recognit. Lett., 76 3 –12 https://doi.org/10.1016/j.patrec.2015.04.006 PRLEDG 0167-8655 (2016). Google Scholar

40. 

F. W. M. Stentiford and R. G. Mortimer, “Some new heuristics for thinning binary handprinted characters for OCR,” IEEE Trans. Syst. Man Cybern., SMC-13 (1), 81 –84 https://doi.org/10.1109/TSMC.1983.6313034 (1983). Google Scholar

41. 

K. Saeed et al., “K3M: a universal algorithm for image skeletonization and a review of thinning techniques,” Int. J. Appl. Math. Comput. Sci., 20 (2), 317 –335 https://doi.org/10.2478/v10006-010-0024-4 (2010). Google Scholar

42. 

M. Tabedzki, K. Saeed and A. Szczepanski, “A modified K3M thinning algorithm,” Int. J. Appl. Math. Comput. Sci., 26 (2), 439 –450 https://doi.org/10.1515/amcs-2016-0031 (2016). Google Scholar

43. 

K. Saeed, M. Rybnik and M. Tabedzki, “Implementation and advanced results on the non-interrupted skeletonization algorithm,” Lect. Notes Comput. Sci., 2124, 601 –609 https://doi.org/10.1007/3-540-44692-3_72 (2001). Google Scholar

44. 

K. Saeed, “Text and image processing: non-interrupted skeletonization,” in Proc. 1st Int. IEEE Conf. Circuits, Syst., Commun. and Comput.—IEEE-CSCC’01, 350 –354 (2001). Google Scholar

45. 

J. Yu and Y. Li, “Improving hilditch thinning algorithms for text image,” in Proc. Int. Conf. E-Learning, E-Business, Enterprise Inf. Syst., and E-Government, EEEE ’09, 76 –79 (2009). https://doi.org/10.1109/EEEE.2009.44 Google Scholar

46. 

P. Patil, S. Suralkar and F. Sheikh, “Rotation invariant thinning algorithm to detect ridge bifurcations for fingerprint identification,” in 17th IEEE Int. Conf. Tools with Artif. Intell. (ICTAI05), (2005). https://doi.org/10.1109/ICTAI.2005.112 Google Scholar

47. 

S. Yokoi, J. Toriwaki and T. Fukumura, “Topological properties in digitized binary pictures,” Syst. Comput. Controls, 4 32 –39 SYCCBB (1973). Google Scholar

48. 

P. Subashini, “Optimal thinning algorithm for detection of FCD in MRI images,” Int. J. Sci. Eng. Res., 2 (9), 1 –7 (2011). Google Scholar

49. 

G. Limei, Z. Yingbin and H. Duan, “A fingerprint minutiae extraction method in quantum thinned binary image,” Int. J. Theor. Phys., 60 1883 –1894 https://doi.org/10.1007/s10773-021-04807-y IJTPBM 0020-7748 (2021). Google Scholar

50. 

S. Kim, D. Lee and J. Kim, “Algorithm for detection and elimination of false minutiae in fingerprint images,” Lect. Notes Comput. Sci., 2091 235 –240 https://doi.org/10.1007/3-540-45344-X_34 LNCSD9 0302-9743 (2001). Google Scholar

51. 

D. Blackburn et al., “Biometric testing and statistics,” Washington, DC (2006). Google Scholar

52. 

B. DeCann and A. Ross, “Relating ROC and CMC curves via the biometric menagerie,” in IEEE Sixth Int. Conf. Biometrics: Theor., Appl. and Syst., BTAS 2013, 1 –8 (2013). https://doi.org/10.1109/BTAS.2013.6712705 Google Scholar

53. 

S. Jabeen and S. A. Khan, “A hybrid false minutiae removal algorithm with boundary elimination,” in IEEE Int. Conf. Syst. of Syst. Eng., (2008). https://doi.org/10.1109/SYSOSE.2008.4724177 Google Scholar

54. 

Q. Xiao and H. Raafat, “Fingerprint image postprocessing: a combined statistical and structural approach,” Pattern Recognit., 24 985 –992 https://doi.org/10.1016/0031-3203(91)90095-M (1991). Google Scholar

55. 

Encyclopedia of Biometrics, 2nd ed., Springer US (2015). Google Scholar

56. 

S. Pankanti, S. Prabhakar and A. K. Jain, “On the individuality of fingerprints,” IEEE Trans. Pattern Anal. Mach. Intell., 24 (8), 1010 –1025 https://doi.org/10.1109/TPAMI.2002.1023799 ITPIDJ 0162-8828 (2002). Google Scholar

Biography

Mahesh Joshi is currently a research scholar in the Department of Computer Science and Engineering at Indian Institute of Technology Indore, India. He received his MTech degree in computer science and engineering from the Visvesvaraya National Institute of Technology, Nagpur, India. His current research interests include fingerprint biometrics and pattern recognition. He has six publications in refereed journals, book chapters, and international conferences to his credit.

Bodhisatwa Mazumdar is currently working as an assistant professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Indore, India. He received his BTech degree from the University of Kalyani and his MS degree in electronics and electrical communication engineering from IIT Kharagpur, India. He received his PhD in computer science and engineering from IIT Kharagpur. His research areas include power-based side-channel analysis of cryptographic primitives, such as S-boxes, and security vulnerability analysis of emerging technologies in VLSI design. He received the best student paper award at the 25th IEEE International Conference on VLSI Design 2012, Hyderabad, India. He was a postdoctoral associate in the Design for Excellence Laboratory at New York University Abu Dhabi, Abu Dhabi. He has published over 50 research articles, including papers in international journals and conferences.

Somnath Dey is currently working as an associate professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Indore, India. He received his BTech degree in information technology from the University of Kalyani in 2004. He completed his MS (by research) and PhD degrees in information technology from the School of Information Technology, Indian Institute of Technology Kharagpur, in 2008 and 2013, respectively. His research interests include biometric security, biometric template protection, and biometric cryptosystems. He has published over 40 research articles, including papers in international journals, conferences, and book chapters.

© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE)
Mahesh Joshi, Bodhisatwa Mazumdar, and Somnath Dey "Investigating the impact of thresholding and thinning methods on the performance of partial fingerprint identification systems: a review," Journal of Electronic Imaging 32(1), 010901 (5 January 2023). https://doi.org/10.1117/1.JEI.32.1.010901
Received: 13 April 2022; Accepted: 30 November 2022; Published: 5 January 2023
KEYWORDS
Fingerprint recognition

System identification

Image enhancement

Image segmentation

Biometrics

Image processing

Windows
