Open Access | 20 April 2023

Comparison of algorithms for contrast enhancement based on triangle orientation discrimination assessments by convolutional neural networks
Abstract

Within the last decades, a large number of techniques for contrast enhancement have been proposed. There are some comparisons of such algorithms for a few images and figures of merit. However, many of these figures of merit cannot assess the usability of altered image content for specific tasks, such as object recognition. In this work, the effect of contrast enhancement algorithms is evaluated by means of the triangle orientation discrimination (TOD), which is a current method for imager performance assessment. The conventional TOD approach requires observers to recognize equilateral triangles pointing in four different directions, whereas here convolutional neural network models are used for the classification task. These models are trained on artificial images with single triangles. Many methods for contrast enhancement highly depend on the content of the entire image. Therefore, the images are superimposed over natural backgrounds with varying standard deviations to provide different signal-to-background ratios. Then, these images are degraded by Gaussian blur and noise representing degrading camera effects and sensor noise. Different algorithms, such as the contrast-limited adaptive histogram equalization or local range modification, are applied. Then, accuracies of the trained models on these images are compared for different contrast enhancement algorithms. Accuracy gains for low signal-to-background ratios and sufficiently large triangles are found, whereas impairments are found for high signal-to-background ratios and small triangles. A high generalization ability of our TOD model is found from the similar accuracies for several image databases used for backgrounds. Finally, implications of replacing triangles with real target signatures when using such advanced digital signal processing algorithms are discussed. The results are a step toward the assessment of those algorithms for generic target recognition.

1. Introduction

For remote sensing applications and reconnaissance, acquisition and operation of cameras in different spectral bands are required, and each has its own pros and cons. The best possible choice among devices for procurement therefore depends on the imager performance for the desired task, e.g., the detection, recognition, or identification (DRI) of distant targets against a background composed of vegetation, urban structures, and sky. Camera data can be acquired in field trials for the characterization of single devices. However, these measurements are time consuming and expensive. Furthermore, the possession of the device is required. Therefore, modeling and image-based simulation of imagers are useful and important for the assessment of imagers. Such tools become even more important when scene-dependent advanced digital signal processing (ADSP) techniques are used in the device, because their impact on performance is difficult to predict. In this paper, the effect of contrast enhancement (CE) algorithms, which have so far mainly been evaluated in terms of aesthetic perception, is considered.

Triangle orientation discrimination (TOD)1 is a well-established image-based approach for the characterization of electro-optical system performance, especially for range performance assessment in remote sensing applications.2 It models the DRI tasks for real targets by a simplified recognition task. The original idea was that an observer has to determine the orientation of an equilateral triangle pointing in one of four directions (up, down, left, and right) shown on a display, which is fed by an imaging system. The ability to clearly discriminate the orientation is reduced depending on different types of degradation, e.g., optical diffraction blur and sensor noise.

However, due to the resurgence of machine learning, automatic target recognition3 is becoming increasingly important. The importance of machine vision applications also led to the emergence of scalable compression frameworks4,5 aimed at high lossy compression while simultaneously preserving image quality for machine and human vision. In contrast to human observers, the performance of these methods does not depend on the properties of the display but merely on the digital output of the imaging system. A prominent and frequently used approach for machine vision is the convolutional neural network (CNN). Such CNN models have also been trained on artificial images of triangles to perform the TOD discrimination and validated on acquired camera data.6 Therefore, these models can be used for automated camera tests in the lab by means of scene projectors.7

In this paper, CNN models for TOD discrimination are trained and validated on degraded artificial images of single triangles superimposed over natural backgrounds from Open Images V6.8 In addition, the training data are processed by the CE algorithms from Table 1 with equal probabilities. Then, a trained model is validated on images with varying levels of background standard deviation σbackground and Gaussian noise σnoise. Parts of this work have already been published elsewhere.32

Table 1. 25 methods for CE.

Algorithm | Ref. | Author | Year
AGC | 9 | Huang et al. | 2013
AGCWD | 9 | Huang et al. | 2013
BBHE | 10 | Kim et al. | 1997
BPDHE | 11 | Ibrahim et al. | 2007
BPHEME | 12 | Wang et al. | 2005
CLAHE | 13 | Zuiderveld et al. | 1994
DCRGC | 14 | Wang et al. | 2009
DCTIE | 15 | Tang et al. | 2003
DSIHE | 16 | Wang et al. | 1999
EHS | 17 | Coltuc et al. | 2006
ESIHE | 18 | Tan et al. | 2019
FHSABP | 19 | Wang et al. | 2008
MMBEBHE | 20 | Chen et al. | 2003
MMSICHE | 21 | Singh et al. | 2014
MPHEBP | 22 | Wongsritong et al. | 1998
NMHE | 23 | Poddar et al. | 2013
QDHE | 24 | Ooi et al. | 2010
RMSHE | 25 | Chen et al. | 2003
RSIHE | 26 | Sim et al. | 2007
RSWHED | 27 | Kim et al. | 2008
RSWHEM | 27 | Kim et al. | 2008
SUACE | 28 | Bandara et al. | 2017
LDRHE | 29 | Bulut et al. | 2022
LRM | 30 | Fahnestock et al. | 1983
WMSHE | 31 | Wu et al. | 2010

In Sec. 2, the considered CE algorithms, the model setup, and the training procedure are described. Section 3 shows the accuracies on validation images with varying background and noise levels. Accuracy differences between individual CE algorithms and identity are shown for varying values of the signal-to-background ratio SNRbackground, signal-to-noise ratio SNRnoise, and triangle circumradius r. Finally, results are discussed in Sec. 4, which concludes the paper.

2. Methods

2.1. Considered Contrast Enhancement Algorithms

Several methods for CE have been proposed within the past decades. Various modifications of conventional global histogram equalization have been proposed to counteract mean brightness shifts,11,12 which lead to annoying artifacts, and to allow smooth transitions to identity.25,26 Also, learning-based methods33–37 as well as methods based on image decomposition38,39 have been proposed. The decomposition often relies on color information, making these methods inapplicable to single-channel image data. However, the scope of this work is limited to easily implemented algorithms, given in Table 1, that operate on single-channel image data.

2.2. Data Generation

For the training of models for TOD,1 images of triangles are generated with varying contrasts, sizes, and four orientations, i.e., up, down, left, and right. Misalignment angles uniformly distributed in [−15 deg, 15 deg] are added to the orientation angles of the triangles to make the models more robust to misalignment. Exceeding the maximum rotation angle of 15 deg would lead to incorrect labeling because rotations of an equilateral triangle by 30 deg result in other labeled orientations due to the 120 deg rotational symmetry. This random rotation is crucial when applying models to real camera data because some misalignment between the field of view of a camera and a target is unavoidable.
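As an illustration, a minimal sketch of how such a labeled triangle image could be generated is given below; the gray levels, the OpenCV-based rasterization, and the mapping from labels to pointing directions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV, assumed here only for rasterizing the triangle

def make_triangle_image(size=64, radius=10.0, c_background=128.0, a_triangle=40.0,
                        rng=np.random.default_rng()):
    """Generate one training image and its orientation label (0: left, 1: up, 2: right, 3: down)."""
    label = int(rng.integers(0, 4))
    pointing_deg = {0: 180.0, 1: 90.0, 2: 0.0, 3: 270.0}[label]
    pointing_deg += rng.uniform(-15.0, 15.0)                 # random misalignment angle
    theta = np.deg2rad(pointing_deg + np.array([0.0, 120.0, 240.0]))
    center = np.array([size / 2.0, size / 2.0])
    # vertices on a circle of the given circumradius; image y axis points downward
    vertices = center + radius * np.stack([np.cos(theta), -np.sin(theta)], axis=1)
    img = np.full((size, size), c_background, dtype=np.float32)
    mask = np.zeros((size, size), dtype=np.uint8)
    cv2.fillPoly(mask, [np.round(vertices).astype(np.int32).reshape(-1, 1, 2)], 1)
    img[mask == 1] += a_triangle                             # triangle offset a_triangle
    return img, label
```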

2.3. Background Overlay

Background images are extracted from OpenImages V68 as random square crops. RGB images are converted to floating-point grayscale images. The mean μcrop and the standard deviation σcrop are calculated within these crops. Then, gray levels Itriangle+overlay(x,y) of an image with a single triangle and background overlay are calculated as

Eq. (1)

$$I_{\mathrm{triangle+overlay}}(x,y)=\begin{cases}c_{\mathrm{background}}+a_{\mathrm{triangle}}, & (x,y)\in\mathrm{triangle}\\[4pt] c_{\mathrm{background}}+\sigma_{\mathrm{set}}\cdot\dfrac{I_{\mathrm{crop}}(x,y)-\mu_{\mathrm{crop}}}{\tilde{\sigma}_{\mathrm{crop}}}, & \mathrm{else.}\end{cases}$$

cbackground is a constant gray level over the entire image. atriangle is an offset value added only for pixels related to the triangle. In Eq. (1), the pixel values of the image crop Icrop(x,y) are normalized by subtracting the mean μcrop and dividing by the corrected standard deviation, and

Eq. (2)

$$\tilde{\sigma}_{\mathrm{crop}}=\begin{cases}\sigma_{\mathrm{crop}}, & \sigma_{\mathrm{crop}}>\mathrm{thres}\\ 1, & \mathrm{else,}\end{cases}$$
with thres = 10⁻⁴. Without this correction, Eq. (1) is not well-defined for uniform image sections because in this case the denominator would be σcrop = 0. Then, the normalized gray levels related to the cropped natural background are scaled to have a specific standard deviation σset. Therefore, the signal-to-background ratio is expressed as

Eq. (3)

$$\mathrm{SNR}_{\mathrm{background}}\,[\mathrm{dB}]=10\log_{10}\!\left(\frac{a_{\mathrm{triangle}}}{\sigma_{\mathrm{set}}}\right).$$
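A minimal sketch of the background overlay of Eqs. (1)–(3), assuming the crop is already available as a floating-point grayscale array; the function and variable names are illustrative.

```python
import numpy as np

def overlay_background(triangle_mask, crop, c_background, a_triangle,
                       snr_background_db, thres=1e-4):
    """Superimpose a normalized background crop according to Eqs. (1)-(3)."""
    mu_crop = crop.mean()
    sigma_crop = crop.std()
    sigma_crop_tilde = sigma_crop if sigma_crop > thres else 1.0        # Eq. (2)
    # sigma_set follows from the desired signal-to-background ratio, Eq. (3)
    sigma_set = a_triangle / 10.0 ** (snr_background_db / 10.0)
    out = c_background + sigma_set * (crop - mu_crop) / sigma_crop_tilde  # Eq. (1), background branch
    out[triangle_mask] = c_background + a_triangle                        # Eq. (1), triangle branch
    return out
```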

2.4. Degradations

Several image degradations representing typical camera effects are applied. Temporal noise is applied as uncorrelated additive Gaussian noise. Fixed pattern noise of a sensor is modeled as line- and column-based additive Gaussian noise. Linear motion blur on the triangle is applied to represent moving targets. Stabilization errors due to camera vibration are applied as linear motion blur and Gaussian blur on the triangle with a background overlay. Blur due to optical diffraction by circular apertures is applied by filters

Eq. (4)

$$g(x,y)=\left(\frac{2J_{1}(r/s)}{r/s}\right)^{2},$$
with $r=\sqrt{x^{2}+y^{2}}$. These filters represent Airy disks40 as the diffraction patterns of a circular aperture. J1(x) is the Bessel function of the first kind and first order. To provide optical diffraction blur for varying values of aperture diameter D, detector pixel pitch p, wavelength λ, and focal length f, a dimensionless scaling factor s is introduced as

Eq. (5)

$$s=\frac{\lambda f}{\pi D p}.$$

A variety of physical parameters (λ, f, D, and p) can be realized by random sampling of s from a uniform distribution in [0.1, smax]. smax = 10 is chosen to limit the proportion of severely degraded images, which hamper the model training because the possible accuracy gains are small compared with statistical fluctuations. Optical diffraction blur is applied by spatial filtering with random 2D kernels of width and height K = 6·smax. The kernel size K is limited due to the lack of information beyond the borders of images of finite size. The scaling factor s ≤ smax is therefore also limited because higher values lead to radial kernel profiles biased by clipping effects due to the limited kernel size. To reduce aliasing due to the oscillatory form of the Airy disk [Eq. (4)], larger kernels fij of fosK×fosK pixels are generated with an oversampling factor fos = 8. These extended kernels are downsampled by average pooling with this oversampling factor to give K×K kernels gij. Normalized filter kernels are then formed as

Eq. (6)

$$\tilde{g}_{ij}=\frac{g_{ij}}{\sum_{i=1}^{K}\sum_{j=1}^{K}g_{ij}}.$$
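The oversampled Airy-disk kernel generation of Eqs. (4)–(6) could be sketched as follows; the coordinate grid construction and the average pooling via reshaping are assumptions for illustration.

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, first order

def airy_kernel(s, s_max=10.0, f_os=8):
    """Build a normalized Airy-disk blur kernel for scale factor s (Eq. 5), following Eqs. (4)-(6)."""
    K = int(6 * s_max)                                   # kernel size in pixels
    n = f_os * K                                         # oversampled size
    coords = (np.arange(n) - (n - 1) / 2.0) / f_os       # coordinates in original-pixel units
    x, y = np.meshgrid(coords, coords)
    r = np.hypot(x, y)
    r = np.where(r == 0, 1e-12, r)                       # avoid division by zero at the center
    f = (2.0 * j1(r / s) / (r / s)) ** 2                 # Eq. (4)
    g = f.reshape(K, f_os, K, f_os).mean(axis=(1, 3))    # average pooling to K x K, Sec. 2.4
    return g / g.sum()                                   # Eq. (6): normalize to unit sum
```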

Several aliasing effects may occur due to small detector fill factors ff<1 (the ratio between the detector dimension and pitch) or different shapes of detector footprints, e.g., rhombic or circular. These effects can be realized by masking extended kernels of Airy disks fij with the detector profile before average pooling. However, this option is not used in this work due to the rare use of nonsquare detectors, low signal-to-noise ratio for low fill factors,41 and faster image generation.

2.5. Contrast Enhancement

The algorithms in Table 1 are applied to 50% of the degraded images, with each algorithm chosen with equal probability. Due to the complex and divergent control flow of some algorithms, these methods are implemented and calculated by a separate application on the CPU. In Fig. 1, pristine training examples, as well as degraded and ADSP processed ones, are shown. These training examples are generated online during training to have a practically infinite amount of training data immune to overfitting. However, the source of background images and the number of possible crops is large but finite.

Fig. 1

Training examples with associated labels: left (0), up (1), right (2), and down (3). (a) Pristine training examples, (b) training examples with natural background, degradations, and CE.


2.6. Model Setup

A conventional CNN architecture shown in Fig. 2 is used for TOD classification on images of dimensions 2ⁿ×2ⁿ. To facilitate the model training, the input image is normalized by linear shifting and scaling of pixel values to have a mean of 0 and a standard deviation of 1 over spatial dimensions. Uniform input images with a standard deviation of 0 are not scaled. Then, the normalized image is downsampled by a chain of building blocks until the spatial dimensions are reduced to 2. Each building block consists of two 2D convolutional layers with 3×3 kernels and rectified linear unit activations (ReLUs) and a subsequent 2×2 max pooling layer. Hence, downsampling by a factor of 2 is applied per block. The spatial dimensions are reduced, and the number of feature maps, given as

Eq. (7)

$$n_{\text{feature-maps}}=\lfloor N_{0}\cdot g^{i}\rfloor,$$
increases for each block i ≥ 0. ⌊·⌋ is the floor function yielding the largest integer not greater than the argument. Two subsequent dense layers and a final softmax layer provide the probabilities for the four orientations. A default configuration with the initial number of filters N0=20, growth factor g=1.2, and first dense layer size L=1024 is arbitrarily chosen.
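A sketch of this architecture and Eq. (7) in Keras is given below; the per-image normalization layer, the He normal initialization, and the exact placement of the two dense layers are assumptions based on the description above, not the authors' exact code.

```python
import math
import tensorflow as tf
from tensorflow.keras import layers

def build_tod_model(input_size=64, n0=20, g=1.2, dense_units=1024):
    """TOD classifier: conv/conv/max-pool blocks until the spatial size reaches 2 (Fig. 2)."""
    inputs = tf.keras.Input(shape=(input_size, input_size, 1))
    # per-image normalization to zero mean and unit standard deviation (uniform images untouched)
    mean = tf.reduce_mean(inputs, axis=[1, 2, 3], keepdims=True)
    std = tf.math.reduce_std(inputs, axis=[1, 2, 3], keepdims=True)
    x = (inputs - mean) / tf.where(std > 0, std, tf.ones_like(std))
    size, i = input_size, 0
    while size > 2:
        n_filters = math.floor(n0 * g ** i)               # Eq. (7)
        x = layers.Conv2D(n_filters, 3, padding="same", activation="relu",
                          kernel_initializer="he_normal")(x)
        x = layers.Conv2D(n_filters, 3, padding="same", activation="relu",
                          kernel_initializer="he_normal")(x)
        x = layers.MaxPooling2D(2)(x)                      # downsampling by a factor of 2 per block
        size //= 2
        i += 1
    x = layers.Flatten()(x)
    x = layers.Dense(dense_units, activation="relu", kernel_initializer="he_normal")(x)
    outputs = layers.Dense(4, activation="softmax")(x)     # probabilities for left, up, right, down
    return tf.keras.Model(inputs, outputs)
```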

Fig. 2

CNN model architecture for TOD classification of 64×64 grayscale images. A 64×64 image is fed into a chain of blocks. Each block consisting of two 2D convolutional layers with ReLU activation and subsequent max pooling layer downsamples the spatial dimensions while increasing the number of feature maps. The last block is followed by two dense layers and a final softmax layer to provide probabilities for all directions (left, up, right, and down).


2.7. Model Training

The models are trained and evaluated with Python 3.9/TensorFlow 2.8. For optimization, ADAM42 with a learning rate η=0.001 is used. Weights are initialized by He normal initialization.43 Models are trained for N=1000 iterations. At the cost of slower training, techniques for accelerating training, such as batch normalization,44 weight normalization,45 or adaptive gradient clipping,46 were deliberately omitted to obtain smaller models with faster inference that are suitable for running on edge TPUs.47 The loss function is cross entropy.

In each iteration, new sample images of triangles with background overlays are generated on the fly during the training for data augmentation. Triangles have a random size, position, and arbitrary orientation angle in [0 deg, 360 deg]. In addition, these images are impaired by the prescribed degradations. 50% of the training images are enhanced with one of the 25 methods for CE with equal probabilities. Corresponding disjoint sets of background images are randomly chosen from the respective partition of the OpenImages V6 database. For evaluation of model performance over the degradation parameters, i.e., SNRbackground, SNRnoise, and triangle size, images are generated in the same way, with background images chosen from the test subset of the database.
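A sketch of such an on-the-fly training loop with ADAM and cross-entropy loss; generate_batch is a hypothetical stand-in for the image generation pipeline of Secs. 2.2–2.5, and the batch size is an assumption.

```python
import tensorflow as tf

model = build_tod_model()                          # from the sketch in Sec. 2.6
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

for iteration in range(1000):                      # N = 1000 training iterations
    # new triangles, backgrounds, degradations, and (with 50% probability) CE per batch
    images, labels = generate_batch(batch_size=64)  # hypothetical generator
    with tf.GradientTape() as tape:
        probs = model(images, training=True)
        loss = loss_fn(labels, probs)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```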

The question arises as to how large the percentage of degraded and ADSP processed images in the training data should be to obtain acceptable accuracies on validation sets of pristine and degraded images. 64×64 models with different percentages of degraded and processed images in the training data were trained and evaluated on different kinds of validation data. The respective accuracies on the validation data are shown in Fig. 3. It can be observed that models trained only with pristine images perform very poorly on degraded and ADSP processed images with natural backgrounds. A slight increase of the percentage significantly raises the validation accuracies on degraded imagery. On the other hand, models trained with a high percentage of degraded images still perform very well on pristine images. Therefore, to make the best use of the model capacity for the image degradations and ADSP methods, all models mentioned below are trained with 100% degraded images, whereas ADSP is applied with 50% probability.

Fig. 3

Dependency of the validation accuracy on the composition of training sets. Horizontal: percentage of degraded images in the training data. Vertical: accuracies on different validation sets, each with N=Norientations·Nbackgrounds·Nsamples=400,000 images, with Nbackgrounds=1000, Norientations=4, Nsamples=100. “All degradations” include images enhanced with one of the algorithms in Table 1.


3. Results

3.1. Dependency of Accuracy on Background Variance

A trained 64×64 model is validated on images of a fixed target, a centered triangle with a circumradius of r = 10 pixel. The triangle circumradius r is converted to the often-used square root area S = √A (Ref. 1) using the Pythagorean theorem:

Eq. (8)

$$S\,[\mathrm{pixel}]=\sqrt{\frac{3\sqrt{3}}{4}}\,r\,[\mathrm{pixel}]\approx 1.14\,r\,[\mathrm{pixel}].$$

1000 random crops of different background images from OpenImages V68 were used. The background standard deviation σbackground of gray levels was varied to obtain different SNRbackground ∈ {0, 1, 2, 3, 4, 5, 10, 20} dB. White Gaussian noise was added to obtain SNRnoise ∈ {0, 5, 10} dB, where

Eq. (9)

$$\mathrm{SNR}_{\mathrm{noise}}\,[\mathrm{dB}]=10\log_{10}\!\left(\frac{a_{\mathrm{triangle}}}{\sigma_{\mathrm{noise}}}\right),$$
and atriangle is about one-sixth of the dynamic range. In Fig. 4, accuracies over SNRbackground for different SNRnoise and corresponding example images for the lowest SNRnoise=0  dB are shown.
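The validation over the SNR grid could be sketched as follows; make_validation_images is a hypothetical stand-in for the generation pipeline with a fixed triangle and varying background and noise levels, and model is the trained 64×64 TOD model.

```python
import numpy as np

snr_background_db = [0, 1, 2, 3, 4, 5, 10, 20]
snr_noise_db = [0, 5, 10]

accuracy = {}
for snr_n in snr_noise_db:
    for snr_b in snr_background_db:
        # hypothetical helper returning images and labels for 1000 crops x 4 orientations
        images, labels = make_validation_images(r=10, snr_background_db=snr_b,
                                                snr_noise_db=snr_n)
        probs = model.predict(images, verbose=0)
        accuracy[(snr_n, snr_b)] = np.mean(np.argmax(probs, axis=1) == labels)
```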

Fig. 4

(a) Accuracy over SNRbackground (dB): fixed triangle circumradius r = 10 pixel and varying SNRnoise (dB). (b) Example validation images: fixed SNRnoise = 0 dB, four orientations, and eight levels of SNRbackground ∈ {0, 1, 2, 3, 4, 5, 10, 20} dB.


Obviously, accuracies are about 100% for SNRbackground ≥ 5 dB and SNRnoise ≥ 5 dB. The accuracies drop monotonically with decreasing SNRbackground. A similar behavior can be observed for varying triangle sizes, as shown in Fig. 5. As expected, the accuracies also drop for decreasing triangle sizes. Further variation of the relative triangle position in the subpixel range shows high fluctuations of accuracies for a low triangle circumradius of r = 1 pixel. This finding is consistent with the problem of recognition near the resolution limit mentioned before.1

Fig. 5

(a) Accuracy over SNRbackground (dB): fixed SNRnoise = 40 dB and varying triangle circumradius r. (b) Example validation images: triangle circumradius r = 2 pixel, four orientations, and eight levels of SNRbackground ∈ {0, 1, 2, 3, 4, 5, 10, 20} dB.


3.2. Upscaling of Receptive Field

Models according to the CNN architecture shown in Fig. 2 were trained for various resolutions of receptive fields, i.e., 128×128, 256×256, 512×512, and 1024×1024. For 512×512 and 1024×1024, a reduced learning rate of η = 10⁻⁴ was required to achieve significant model improvements compared with the initial model states. Otherwise, model performances stagnated on average at the 25% guessing rate. In Fig. 6, validation accuracies over SNRbackground are shown for different sizes of receptive fields.

Fig. 6

Validation accuracy over SNRbackground (dB) for different sizes of receptive fields and SNRnoise ∈ {0, 5, 20} dB based on Ntotal = Ndirection·Nbackground = 4000 images containing a centered triangle with circumradius r = 10 pixel, superposed by Nbackground = 1000 different backgrounds formed by random image crops of respective sizes from the OpenImages V6 test set and Ndirection = 4 triangle orientations. For increasing field size, the model performance decreases.


There is a general trend of decreasing accuracies for larger receptive fields. This may be due to the fact that larger receptive fields can contain more objects with a high similarity to triangles. Furthermore, the growth factor g=1.2 for the number of feature maps per block nfeature-maps may be too small to provide enough model capacity for the increasing range of triangle sizes to handle. A surprising fact is the lower accuracies for the higher SNRnoise = 20 dB compared with SNRnoise = 5 dB and receptive fields of 256×256 pixels and larger. This might indicate a beneficial effect of Gaussian noise in suppressing structures in the background resembling the triangle target.

3.3. Comparison of Methods for Contrast Enhancement

A trained 64×64 model was validated by images processed with the 25 ADSP algorithms shown in Table 1. Many algorithms operate only on integer pixel values based on gray level distributions, as is common for natural images. Hence, to prevent saturation due to clipping, the pixel values are shifted to have a mean of half of the dynamic range, upscaled by a factor of 2⁸ − 1 = 255, and converted to 8 bit. After ADSP processing, this shifting and upscaling is reverted, and pixel values are converted back to floating point numbers.
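A sketch of this 8-bit round trip, assuming pixel values in a unit dynamic range; the clipping and the CLAHE call via OpenCV are illustrative choices, not the authors' implementation.

```python
import numpy as np
import cv2

def apply_ce_8bit(img_float, ce_fn):
    """Shift the mean to half of the (unit) dynamic range, quantize to 8 bit, run CE, and revert."""
    shift = 0.5 - img_float.mean()                        # mean -> half of the dynamic range
    img8 = np.clip(np.round((img_float + shift) * 255.0), 0, 255).astype(np.uint8)
    out8 = ce_fn(img8)                                    # e.g., one of the algorithms in Table 1
    return out8.astype(np.float32) / 255.0 - shift        # revert upscaling and shifting

# example: CLAHE as implemented in OpenCV
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
# enhanced = apply_ce_8bit(img, clahe.apply)
```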

In Fig. 7, differences of accuracies between single CE algorithms and the identity are shown for varying SNRbackground and SNRnoise. For convenience, the algorithms were ranked with respect to the maximum value, and only the top 10 algorithms are shown.

Fig. 7

Accuracy difference ΔI (%) = ICE (%) − I0 (%) for the top 10 CE algorithms ranked with respect to the maximal value. Accuracies ICE (I0) for every data point are based on evaluations of the 64×64 TOD model on Ntotal = Ndirection·Nbackground = 4000 artificial images with (without) CE containing a centered triangle with circumradius r = 10 pixel, Ndirection = 4 triangle orientations, and Nbackground = 1000 different backgrounds formed by random image crops from the OpenImages V68 test set. In addition, before CE, Gaussian noise was added: (a) SNRnoise = 0 dB, (b) SNRnoise = 5 dB, and (c) SNRnoise = 20 dB.


It can be observed that there are accuracy gains for low SNRbackground < 5 dB, and the accuracy differences are quite similar for the top 10 algorithms. Because the accuracy is about 100% for SNRbackground ≥ 5 dB and SNRnoise ≥ 5 dB without ADSP processing according to Fig. 4, no significant improvement by CE can be expected for these cases. By contrast, a severe degradation of model performance occurs for some CE algorithms and high SNRnoise = 20 dB.

To validate CE algorithms on a variety of degradations, different triangle parameters and degradation parameters were varied and uniformly distributed in the ranges given in Table 2.

Table 2. Boundaries for uniformly distributed triangle parameters and degradation parameters with the image dimensions I = 64 and the dynamic range DR = 255.

Parameter | Min | Max
Background level cbackground | 0.25 DR | 0.75 DR
Amplitude atriangle | −0.25 DR | 0.25 DR
Triangle circumradius r | 0 | 0.25 I
Horizontal position | 0.25 I | 0.75 I
Vertical position | 0.25 I | 0.75 I
SNRbackground (dB) | 0 | 20
SNRnoise (dB) | 0 | 20
Orientation angle α (deg) | 0 | 360

Ntargets = 1000 random samples of triangles and degradation parameters were combined with Nbackgrounds = 1000 natural backgrounds as random crops from Open Images V6.8 This procedure was repeated Nchunks = 10 times with varying random seeds, resulting in different triangles and background images. Model accuracies were calculated on Ntotal = Nchunks·Ntargets·Nbackgrounds = 10⁷ images with 64×64 pixels. The same procedure was repeated, with each of the 25 CE algorithms from Table 1 being applied respectively as the final step. Compared with a grid variation of individual triangle and degradation parameters, random sampling of many of these parameters allows for the investigation of individual parameters by arbitrary parameter cuts, whereas the other parameters remain widely distributed. This gives better insights into possible fluctuations of model performance when those parameters are unknown.
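A sketch of the random parameter sampling over the ranges of Table 2; the dictionary keys and the seed handling are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(seed=0)      # one of the N_chunks = 10 random seeds
I, DR = 64, 255.0

def sample_parameters():
    """Draw one set of triangle and degradation parameters uniformly from the ranges in Table 2."""
    return {
        "c_background": rng.uniform(0.25 * DR, 0.75 * DR),
        "a_triangle": rng.uniform(-0.25 * DR, 0.25 * DR),
        "r": rng.uniform(0.0, 0.25 * I),
        "x0": rng.uniform(0.25 * I, 0.75 * I),
        "y0": rng.uniform(0.25 * I, 0.75 * I),
        "snr_background_db": rng.uniform(0.0, 20.0),
        "snr_noise_db": rng.uniform(0.0, 20.0),
        "alpha_deg": rng.uniform(0.0, 360.0),
    }

targets = [sample_parameters() for _ in range(1000)]   # N_targets = 1000 per chunk
```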

As shown in Fig. 8, accuracy differences between each of the 25 CE algorithms and identity were calculated for parameter cuts of the triangle circumradius r and the signal-to-background ratio SNRbackground. Only accuracies on images with values in r ∈ [1,3] pixel (top), r ∈ [14,16] pixel (bottom), SNRbackground ∈ [0,2] dB (left), and SNRbackground ∈ [18,20] dB (right) were selected. The interquartile ranges (IQR), shown as orange boxes, contain the values between the 25% percentile and the 75% percentile. The boxes are extended by whiskers of at most 1.5·IQR on both sides, limited by the respective minimal and maximal values in the data. Outliers are shown as circle markers.

Fig. 8

Whisker-box plots of accuracy differences of the 64×64 model between 25 CE algorithms and identity based on varying cuts for (a) triangle circumradius r ∈ [1,3] pixel, (b) r ∈ [14,16] pixel, SNRbackground ∈ [0,2] dB (left), and SNRbackground ∈ [18,20] dB (right). Insets show examples of centered triangles for r and SNRbackground in the corresponding ranges with SNRnoise = ∞. For each algorithm, means (green triangle markers), medians (red lines), and IQRs (orange boxes) of the accuracy differences for the counts in the respective parameter intervals are shown (further details in the text). At high SNRbackground and low triangle circumradius r, the 25 CE algorithms lead to significant impairment of the model accuracies.


Obviously, a high SNRbackground and a low triangle circumradius r lead to a significant impairment of model accuracies by most of the 25 CE algorithms. Accuracy differences at high triangle circumradius r and high SNRbackground (bottom right) show small IQRs, as the accuracy is often saturated at 100% for high SNRnoise. Hence, accuracies for low SNRnoise are rendered as outliers. Only for high r ∈ [14,16] pixel and low SNRbackground ∈ [0,2] dB (bottom left) do some CE algorithms show accuracy gains. Also, monotonic transitions of accuracy differences for varying r and SNRbackground were observed. The reason for the significant impairment at high SNRbackground could be that a narrow gray level distribution of background values leads to an excessive enhancement of the background by most CE algorithms, resulting in textures with a low dynamic range, steep edges, and a high similarity to the triangle to be discriminated. On the other hand, a large triangle reduces the number of background pixels and hence their contribution to the gray level distribution of the entire image. Most of the investigated CE algorithms depend on the image gray level distribution.

3.4. Generalization on Background Images of Different Image Databases

To investigate the ability of the TOD model to generalize to a larger variety of background images, the trained 64×64 model was validated on artificial images with a single centered triangle with a fixed circumradius of r=10  pixel superposed with background images resulting from random crops of images from different image databases. Examples of such image crops with 64×64  pixels are shown in Fig. 9 for different image databases: Pascal VOC,48 ILSVRC2012,49 FLIR ADAS,50 OpenImages V6,8 Stanford dogs,51 Oxford flowers 102,52 Caltech 101,53 and Gaussian noise.

Fig. 9

Examples of 64×64 random crops of different image databases used as backgrounds: Pascal VOC,48 ILSVRC2012,49 FLIR ADAS,50 OpenImages V6,8 Stanford dogs,51 Oxford flowers 102,52 Caltech 101,53 and Gaussian noise. Images are converted to single-channel data by averaging over the channels of the RGB data. For clarity, example images are shown using the colormap “viridis,” and pixel values are scaled to have a mean that equals the center of the dynamic range and a standard deviation of one-fourth of the dynamic range. Compared with the other image databases, FLIR ADAS contains images with a high noise content.


In Fig. 10, the model accuracies over N = Ndirection·Nbackground images and different SNRbackground are shown, with Ndirection = 4 triangle orientations and Nbackground = 1000 different backgrounds from several image databases. In addition, the generated artificial images are impaired by Gaussian noise with a high noise level SNRnoise = 0 dB (left) and a low noise level SNRnoise = 20 dB (right). It can be observed that the model accuracies are very similar for most of the image databases. In contrast, background images of Gaussian noise yield significantly better accuracies than those of the image databases. Images from FLIR ADAS50 show accuracies between those of Gaussian noise and the other image databases, which may be due to the relatively high noise content in the FLIR ADAS images. This indicates that structured backgrounds from the image databases have a stronger degradational effect on the triangle recognition than Gaussian noise for equal standard deviations of pixel value fluctuations.

Fig. 10

Model accuracies over SNRbackground(dB). For each data point, N=Ndirection·Nbackground artificial images are generated with Ndirection=4 triangle orientations and Nbackground=1000 different backgrounds used from random crops of different image databases. Gaussian noise is added with (a) SNRnoise=0  dB and (b) SNRnoise=20  dB.


The qualitative behavior of the model accuracy is similar for different databases when applying the methods for CE. The same is true for the ranges of SNRbackground values, for which CE yields an improvement in accuracy. For convenience, only an example for applying CLAHE is shown in Fig. 11.

Fig. 11

Model accuracy differences ΔI = ICLAHE − IIdentity over SNRbackground (dB) with accuracies ICLAHE based on images enhanced by CLAHE and accuracies IIdentity for images without CE. For each data point, N = Ndirection·Nbackground artificial images are generated with Ndirection = 4 triangle orientations and Nbackground = 1000 different backgrounds used from random crops of different image databases. Gaussian noise is added with (a) SNRnoise = 0 dB and (b) SNRnoise = 20 dB.


3.5. Variation of Model Size

The architectures used so far were an arbitrary initial choice. One might ask whether similar or better accuracies could have been achieved by smaller or larger models. To answer this question, further models were trained based on the default configuration (Sec. 2.6) with modifications of single parameters, i.e., the initial number of filters N0, the growth factor g, the dense layer size L, and the number of dense layers. In Fig. 12, validation accuracies are shown for a varying initial number of filters N0, growth factor g, dense layer size L, and number of extra dense layers in addition to the final dense layer with four units. Models are validated on images with Norientation = 4 orientations, Nbackground = 1000 backgrounds, and Nsample = 100 samples, resulting in Ntotal = Norientations·Nbackground·Nsample = 400,000 images. Accuracies denoted with “all degradations” contain images enhanced by one of the CE algorithms (Table 1) with equal probabilities. Furthermore, validation accuracies are compared for different activations, i.e., leaky ReLU,54 exponential linear unit (ELU),55 Gaussian error linear unit (GELU),56 scaled exponential linear unit (SELU),57 adaptive piecewise linear unit (APLU),58 tanh, sigmoid,59 softplus,59 softsign,59 and swish.60 It can be observed that reductions of the dense layer size as low as L=22, the growth factor g=0.7, and the number of filters N0=10 result in accuracies comparable to the default configuration. Even for a varying number of extra dense layers with L=1024 units in addition to the final dense layer with four units, there are only slight variations in accuracies. Among the 11 investigated activations, the ReLU59 from the default configuration (Sec. 2.6) performs very well compared with most other activations. APLU and sigmoid did not converge above the guessing rate of 25% at all.

Fig. 12

Model comparisons for (a) varying dense layer size, (b) growth factor g, (c) initial number of filters N0, (d) the number of extra dense layers with L=1024 units, and (e) different activation functions. Accuracies on different validation sets, each with N=Norientations·Nbackgrounds·Nsamples=400,000 images, with Nbackgrounds=1000, Norientations=4, Nsamples=100. “All degradations” include CE with one of the algorithms in Table 1.


3.6. Model Complexity

We benchmarked our trained TOD models on our machine (a Ryzen 9 3900X processor with an NVIDIA GeForce RTX 2080Ti graphics card and 64 GB RAM). Table 3 gives the average running time on the NVIDIA GeForce RTX 2080Ti, as well as the number of parameters and floating point operations (FLOPs), which equals twice the number of multiply-accumulate computations (MACs).

Table 3. Model properties and benchmark results on an NVIDIA GeForce RTX 2080Ti.

Model | Input shape (height, width, # channels) | Parameters | FLOPs = 2·MACs (single image) | Average running time (ms) (batch size of 64 images, 100 calls, 95% confidence)
TOD | 64×64×1 | 0.2M | 0.2M | 8.0 ± 0.2
TOD | 128×128×1 | 0.3M | 0.3M | 8.9 ± 0.3
TOD | 256×256×1 | 0.4M | 0.4M | 20 ± 4
TOD | 512×512×1 | 0.5M | 0.5M | 80 ± 12
TOD | 1024×1024×1 | 0.7M | 0.7M | 87 ± 13 (16 images)
VGG16 (61) | 224×224×3 | 138M | 30.9G | 110 ± 10
VGG19 (61) | 224×224×3 | 143M | 39.2G | 133 ± 11
ResNet50 v2 (62) | 224×224×3 | 25.6M | 7.96G | 63.7 ± 1.5
Inception v3 (63) | 299×299×3 | 23.8M | 11.6G | 105.8 ± 1.4
Xception (64) | 299×299×3 | 22.9M | 1.31G | 162 ± 7

The trained TOD models are smaller and faster compared with current machine vision backbones, which are also shown in Table 3. Faster model inference allows for a stronger focus on several image degradations. Compared with the classification of RGB images in the visible spectrum, the TOD models are applicable to single-channel data, and the four triangle classes are symmetric and balanced. Furthermore, the triangle shape and texture are independent of any spectral band, in contrast to many image databases in the visible band. This is crucial, e.g., for range performance assessment of imagers in several infrared spectral bands [long-wavelength infrared (LWIR), mid-wavelength infrared (MWIR), and short-wavelength infrared (SWIR)].
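A minimal sketch of how such a per-batch timing benchmark could be run with Keras; the warm-up call and the normal-approximation confidence interval are illustrative choices.

```python
import time
import numpy as np

def benchmark(model, input_shape=(64, 64, 1), batch_size=64, n_calls=100):
    """Average inference time per batch (in ms) over repeated calls with a 95% confidence interval."""
    batch = np.random.rand(batch_size, *input_shape).astype(np.float32)
    model.predict(batch, verbose=0)                  # warm-up call, excluded from timing
    times = []
    for _ in range(n_calls):
        t0 = time.perf_counter()
        model.predict(batch, verbose=0)
        times.append(time.perf_counter() - t0)
    times = np.array(times) * 1e3                    # seconds -> milliseconds
    # 95% confidence interval of the mean, assuming approximately normal timing fluctuations
    return times.mean(), 1.96 * times.std(ddof=1) / np.sqrt(n_calls)
```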

3.7. Comparison with Other Image Quality Metrics

Different image quality metrics have been proposed for the assessment of methods for CE, such as the absolute mean brightness error (AMBE),20 discrete entropy,12 the measure of enhancement (EME) and EME based on entropy (EMEE),65 QRCM,66 UIQ,67 EBCM,68 and CII.29 A more detailed overview of further image quality metrics and methods for CE can be found in another work.69

However, most of these metrics were validated by subjective image quality assessments and may not correlate well with accuracies of models for TOD recognition or other machine vision tasks. To investigate some current nonreference metrics on images used in the evaluation of TOD models, these metrics were calculated for 64×64 images with a centered triangle superposed by backgrounds taken from OpenImages V6 scaled to different SNRbackground and impaired by Gaussian noise with different SNRnoise. In addition, these images were enhanced by three CE methods, CLAHE,13 EHS,17 and SUACE,28 which were among the top 10 algorithms in Fig. 7. In Fig. 13, values for nonreference image quality metrics EBCM,68 EME, EMEE,65 and entropy12 over SNRbackground are shown.
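As an example of such a no-reference metric, the discrete entropy of the gray level histogram can be computed as sketched below; this is the generic Shannon entropy definition and not necessarily the exact variant of Ref. 12.

```python
import numpy as np

def discrete_entropy(img8):
    """Shannon entropy (in bits) of the gray level distribution of an 8-bit image."""
    hist = np.bincount(img8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins; 0*log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```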

Fig. 13

Means of nonreference metric values Q0 (EBCM,68 EME,65 EMEE,65 and entropy,12 left column) and metric differences ΔQCE = QCE − Q0 for three CE methods, i.e., CLAHE,13 EHS,17 and SUACE28 (right columns) over Nbackground = 1000 images of 64×64 pixels with a centered triangle with circumradius r = 10 pixel for different SNRbackground (dB) and SNRnoise (dB). ΔQCE = 0 is shown as a horizontal blue dashed line. Example images with centered triangles superposed with background and impaired by Gaussian noise with SNRnoise = SNRbackground = 3 dB (bottom), (left) without CE and (right) with CE by the respective algorithm.


It can be observed that the metric values Q0 are high for low SNRbackground and low SNRnoise, representing high variances of the background and the Gaussian noise, respectively. The metric values Q0 decrease monotonically with increasing SNRbackground and SNRnoise. The only exception is EBCM for SNRnoise = 20 dB, which we assume is due to the regularization of denominators in the algorithm. Therefore, low metric values represent conditions under which TOD accuracies are high. However, very similar metric values Q0 could also be observed when the triangle was omitted. This indicates that these metrics are mainly determined by the background for triangles with r ≤ 10 pixel. Thus, the metrics are only weakly related, or not at all, to the TOD task performance if the triangle is very small. CE by the three methods generally leads to positive shifts of the metric values QCE, i.e., ΔQCE = QCE − Q0 > 0, which can be interpreted as a predominant enhancement of the background and noise, which aggravates TOD recognition. Similar results were found for the evaluation of the full-reference metrics AMBE,20 CII,29 QRCM,66 and UIQ.67 In summary, the metrics can be meaningful for the assessment of CE, whereas they cannot indicate whether the CE is beneficial for TOD recognition and possibly other classification tasks with small targets.

4. Conclusion

Accuracies of a sequential CNN model performing TOD discrimination were compared with respect to 25 different methods for CE. The background overlay was crucial because the accuracy is significantly impaired for high background variance and the CE algorithms strongly depend on it. Accuracy gains for low signal-to-background ratios SNRbackground<5  dB and a sufficiently large triangle r=10  pixel were shown. Model accuracies on images with randomly sampled triangle and degradation parameters revealed significant impairment by the investigated CE algorithms for a high SNRbackground and low triangle circumradius r. The strong fluctuations of accuracy differences highlight the difficulty in showing clear superiority of individual algorithms.

Models with increased resolution of the receptive field have shown decreasing accuracies, which may indicate that the growth of the number of model parameters was insufficient to represent the increasing range of triangle sizes. Another reason may be a higher number of background artifacts mimicking triangles. Larger images have more pixels. Therefore, their gray level distributions are statistically more stable. Hence, CE algorithms based on these gray level distributions should provide lower variations in the processed images and the associated accuracies. To prove this hypothesis, further investigations on larger receptive fields are required.

Variations of model size parameters, i.e., the number of filters N0, the growth factor g, the number of dense layers, and the activation function, have shown that the used default configuration is close to optimal based on the used model architecture and maximal values of degradation parameters used for the generation of training/validation data. Stronger degradations may require larger models for optimal accuracies.

The presented method can be used in an analogous way to assess the impact of other scene-based ADSP on military tasks. Moreover, the trained models can be used together with a test bed with an infrared scene projector for hardware-in-the-loop testing of imagers including embedded ADSP. Finally, the methodology may be easily extended to more sophisticated classification tasks with real target signatures. In contrast to the triangle, real target signatures also have textures with spatial variations. Therefore, the gray level distribution and the CE based on it depend more strongly on the variations within the target, especially if the target covers many image pixels. Features related to these textures may require larger models compared with those investigated in this work.

References

1. 

P. Bijl and J. M. Valeton, “Triangle orientation discrimination: the alternative to minimum resolvable temperature difference and minimum resolvable contrast,” Opt. Eng., 37 (7), 1976 –1983 https://doi.org/10.1117/1.601904 (1998). Google Scholar

2. 

S. Keßler et al., “The European computer model for optronic system performance prediction (ECOMOS),” Proc. SPIE, 10433 282 –294 https://doi.org/10.1117/12.2262590 PSISDG 0277-786X (2017). Google Scholar

3. 

A. d’Acremont et al., “CNN-based target recognition and identification for infrared imaging in defense systems,” Sensors, 19 (9), 2040 https://doi.org/10.3390/s19092040 SNSRES 0746-9462 (2019). Google Scholar

4. 

Y. Shi et al., “Scalable compression for machine and human vision tasks via multi-branch shared module,” J. Electron. Imaging, 31 023014 https://doi.org/10.1117/1.JEI.31.2.023014 JEIME5 1017-9909 (2022). Google Scholar

5. 

Q. Wang, L. Shen and Y. Shi, “Recognition-driven compressed image generation using semantic-prior information,” IEEE Signal Process. Lett., 27 1150 –1154 https://doi.org/10.1109/LSP.2020.3004967 IESPEJ 1070-9908 (2020). Google Scholar

6. 

D. Wegner and E. Repasi, “Imager assessment by classification of geometric primitives,” Proc. SPIE, 11406 17 –25 https://doi.org/10.1117/12.2558572 PSISDG 0277-786X (2020). Google Scholar

7. 

D. Wegner and E. Repasi, “Image based performance analysis of thermal imagers,” Proc. SPIE, 9820 982016 https://doi.org/10.1117/12.2223629 PSISDG 0277-786X (2016). Google Scholar

8. 

A. Kuznetsova et al., “The Open Images Dataset V4: unified image classification, object detection, and visual relationship detection at scale,” Int. J. Comput. Vis., 128 1956 –1981 (2020). Google Scholar

9. 

S.-C. Huang, F.-C. Cheng and Y.-S. Chiu, “Efficient contrast enhancement using adaptive gamma correction with weighting distribution,” IEEE Trans. Image Process., 22 1032 –1041 https://doi.org/10.1109/TIP.2012.2226047 IIPRE4 1057-7149 (2013). Google Scholar

10. 

Y.-T. Kim, “Contrast enhancement using brightness preserving bi-histogram equalization,” IEEE Trans. Consum. Electron., 43 1 –8 https://doi.org/10.1109/30.580378 ITCEDA 0098-3063 (1997). Google Scholar

11. 

H. Ibrahim and N. Kong, “Brightness preserving dynamic histogram equalization for image contrast enhancement,” IEEE Trans. Consum. Electron., 53 1752 –1758 https://doi.org/10.1109/TCE.2007.4429280 ITCEDA 0098-3063 (2007). Google Scholar

12. 

C. Wang and Z. Ye, “Brightness preserving histogram equalization with maximum entropy: a variational perspective,” IEEE Trans. Consum. Electron., 51 1326 –1334 https://doi.org/10.1109/TCE.2005.1561863 ITCEDA 0098-3063 (2005). Google Scholar

13. 

K. Zuiderveld, “Graphics Gems IV,” Contrast Limited Adaptive Histogram Equalization, 474 –485 Academic Press Professional, Inc., San Diego, California (1994). Google Scholar

14. 

Z.-G. Wang, Z.-H. Liang and C.-L. Liu, “A real-time image processor with combining dynamic contrast ratio enhancement and inverse gamma correction for PDP,” Displays, 30 (3), 133 –139 https://doi.org/10.1016/j.displa.2009.03.006 DISPDP 0141-9382 (2009). Google Scholar

15. 

J. Tang, E. Peli and S. Acton, “Image enhancement using a contrast measure in the compressed domain,” IEEE Signal Process. Lett., 10 289 –292 https://doi.org/10.1109/LSP.2003.817178 IESPEJ 1070-9908 (2003). Google Scholar

16. 

Y. Wang, Q. Chen and B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Trans. Consum. Electron., 45 68 –75 https://doi.org/10.1109/30.754419 ITCEDA 0098-3063 (1999). Google Scholar

17. 

D. Coltuc, P. Bolon and J.-M. Chassery, “Exact histogram specification,” IEEE Trans. Image Process., 15 1143 –1152 https://doi.org/10.1109/TIP.2005.864170 IIPRE4 1057-7149 (2006). Google Scholar

18. 

S. F. Tan and N. A. M. Isa, “Exposure based multi-histogram equalization contrast enhancement for non-uniform illumination images,” IEEE Access, 7 70842 –70861 https://doi.org/10.1109/ACCESS.2019.2918557 (2019). Google Scholar

19. 

C. Wang, J. Peng and Z. Ye, “Flattest histogram specification with accurate brightness preservation,” IET Image Process., 2 249 –262 https://doi.org/10.1049/iet-ipr:20070198 (2008). Google Scholar

20. 

S.-D. Chen and A. Ramli, “Minimum mean brightness error bi-histogram equalization in contrast enhancement,” IEEE Trans. Consum. Electron., 49 1310 –1319 https://doi.org/10.1109/TCE.2003.1261234 ITCEDA 0098-3063 (2003). Google Scholar

21. 

K. Singh and R. Kapoor, “Image enhancement via median-mean based sub-image-clipped histogram equalization,” Optik, 125 (17), 4646 –4651 https://doi.org/10.1016/j.ijleo.2014.04.093 OTIKAJ 0030-4026 (2014). Google Scholar

22. 

K. Wongsritong et al., “Contrast enhancement using multipeak histogram equalization with brightness preserving,” in IEEE Asia-Pacific Conf. Circuits and Syst., 1998. IEEE APCCAS 1998, 455 –458 (1998). https://doi.org/10.1109/APCCAS.1998.743808 Google Scholar

23. 

S. Poddar et al., “Non-parametric modified histogram equalisation for contrast enhancement,” IET Image Process., 7 641 –652 https://doi.org/10.1049/iet-ipr.2012.0507 (2013). Google Scholar

24. 

C. H. Ooi and N. A. M. Isa, “Adaptive contrast enhancement methods with brightness preserving,” IEEE Trans. Consum. Electron., 56 2543 –2551 https://doi.org/10.1109/TCE.2010.5681139 ITCEDA 0098-3063 (2010). Google Scholar

25. 

S.-D. Chen and A. Ramli, “Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation,” IEEE Trans. Consum. Electron., 49 1301 –1309 https://doi.org/10.1109/TCE.2003.1261233 ITCEDA 0098-3063 (2003). Google Scholar

26. 

K. Sim, C. Tso and Y. Tan, “Recursive sub-image histogram equalization applied to gray scale images,” Pattern Recognit. Lett., 28 (10), 1209 –1221 https://doi.org/10.1016/j.patrec.2007.02.003 PRLEDG 0167-8655 (2007). Google Scholar

27. 

M. Kim and M. Chung, “Recursively separated and weighted histogram equalization for brightness preservation and contrast enhancement,” IEEE Trans. Consum. Electron., 54 1389 –1397 https://doi.org/10.1109/TCE.2008.4637632 ITCEDA 0098-3063 (2008). Google Scholar

28. 

A. M. R. R. Bandara, K. A. S. H. Kulathilake and P. W. G. R. M. P. B. Giragama, “Super-efficient spatially adaptive contrast enhancement algorithm for superficial vein imaging,” in IEEE Int. Conf. Ind. and Inf. Syst. (ICIIS), 1 –6 (2017). https://doi.org/10.1109/ICIINFS.2017.8300427 Google Scholar

29. 

F. Bulut, “Low dynamic range histogram equalization (LDR-HE) via quantized Haar wavelet transform,” Visual Comput., 38 (6), 2239 –2255 https://doi.org/10.1007/s00371-021-02281-5 VICOE5 0178-2789 (2022). Google Scholar

30. 

J. D. Fahnestock and R. A. Schowengerdt, “Spatially variant contrast enhancement using local range modification,” Opt. Eng., 22 (3), 223378 https://doi.org/10.1117/12.7973124 (1983). Google Scholar

31. 

P.-C. Wu, F.-C. Cheng and Y.-K. Chen, “A weighting mean-separated sub-histogram equalization for contrast enhancement,” in Int. Conf. Biomed. Eng. and Comput. Sci. (ICBECS), 1 –4 (2010). https://doi.org/10.1109/ICBECS.2010.5462511 Google Scholar

32. 

D. Wegner and S. Keßler, “Comparison of algorithms for contrast enhancement based on TOD assessments by convolutional neural networks,” Proc. SPIE, 12271 122710H https://doi.org/10.1117/12.2638539 PSISDG 0277-786X (2022). Google Scholar

33. 

J. Park et al., “Distort-and-recover: color enhancement using deep reinforcement learning,” in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR), (2018). https://doi.org/10.1109/CVPR.2018.00621 Google Scholar

34. 

B. Xiao et al., “Histogram learning in image contrast enhancement,” in IEEE/CVF Conf. Comput. Vis. and Pattern Recognit. Workshops (CVPRW), 1880 –1889 (2019). https://doi.org/10.1109/CVPRW.2019.00239 Google Scholar

35. 

G. F. C. Campos et al., “Machine learning hyperparameter selection for contrast limited adaptive histogram equalization,” EURASIP J. Image Video Process., 2019 (1), 59 https://doi.org/10.1186/s13640-019-0445-4 (2019). Google Scholar

36. 

Y.-G. Shin et al., “Unsupervised deep contrast enhancement with power constraint for OLED displays,” IEEE Trans. Image Process., 29 2834 –2844 https://doi.org/10.1109/TIP.2019.2953352 IIPRE4 1057-7149 (2020). Google Scholar

37. 

V. Bychkovsky et al., “Learning photographic global tonal adjustment with a database of input/output image pairs,” in CVPR, (2011). https://doi.org/10.1109/CVPR.2011.5995332 Google Scholar

38. 

S. Lombardi and K. Nishino, “Reflectance and illumination recovery in the wild,” IEEE Trans. Pattern Anal. Mach. Intell., 38 (1), 129 –141 https://doi.org/10.1109/TPAMI.2015.2430318 ITPIDJ 0162-8828 (2016). Google Scholar

39. 

H. Yue et al., “Contrast enhancement based on intrinsic image decomposition,” IEEE Trans. Image Process., 26 (8), 3981 –3994 https://doi.org/10.1109/TIP.2017.2703078 IIPRE4 1057-7149 (2017). Google Scholar

40. 

G. B. Airy, “On the diffraction of an object-glass with circular aperture,” Trans. Cambridge Philos. Soc., 5 283 TCPSAE 0371-5779 (1835). Google Scholar

41. 

G. C. Holst, Electro-Optical Imaging System Performance, 6th ed., JCD Publishing (2017). Google Scholar

42. 

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” (2014). Google Scholar

43. 

K. He et al., “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” (2015). Google Scholar

44. 

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proc. 32nd Int. Conf. Mach. Learn., 448 –456 (2015). Google Scholar

45. 

T. Salimans and D. P. Kingma, “Weight normalization: a simple reparameterization to accelerate training of deep neural networks,” (2016). Google Scholar

46. 

A. Brock et al., “High-performance large-scale image recognition without normalization,” (2021). Google Scholar

47. 

A. Yazdanbakhsh et al., “An evaluation of edge TPU accelerators for convolutional neural networks,” (2021). Google Scholar

48. 

M. Everingham et al., “The pascal visual object classes (VOC) challenge,” Int. J. Comput. Vis., 88 (2), 303 –338 https://doi.org/10.1007/s11263-009-0275-4 IJCVEQ 0920-5691 (2010). Google Scholar

49. 

O. Russakovsky et al., “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis., 115 (3), 211 –252 https://doi.org/10.1007/s11263-015-0816-y IJCVEQ 0920-5691 (2015). Google Scholar

51. 

A. Khosla et al., “Novel dataset for fine-grained image categorization,” in First Workshop on Fine-Grained Visual Categorization, IEEE Conf. Comput. Vis. and Pattern Recognit., (2011). Google Scholar

52. 

M.-E. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes,” in Indian Conf. Comput. Vis., Graph. and Image Process., (2008). https://doi.org/10.1109/ICVGIP.2008.47 Google Scholar

53. 

F.-F. Li et al., “Caltech 101,” (2022). Google Scholar

54. 

A. Maas, A. Hannun and A. Ng, “Rectifier nonlinearities improve neural network acoustic models,” (2013). Google Scholar

55. 

D. Clevert, T. Unterthiner and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” (2015). Google Scholar

56. 

D. Hendrycks and K. Gimpel, “Bridging nonlinearities and stochastic regularizers with Gaussian error linear units,” (2016). Google Scholar

57. 

G. Klambauer et al., “Self-normalizing neural networks,” (2017). Google Scholar

58. 

F. Agostinelli et al., “Learning activation functions to improve deep neural networks,” (2014). Google Scholar

60. 

P. Ramachandran, B. Zoph and Q. V. Le, “Searching for activation functions,” (2017). Google Scholar

61. 

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” (2014). Google Scholar

62. 

K. He et al., “Identity mappings in deep residual networks,” (2016). Google Scholar

63. 

C. Szegedy et al., “Rethinking the inception architecture for computer vision,” (2015). Google Scholar

64. 

F. Chollet, “Xception: deep learning with depthwise separable convolutions,” (2016). Google Scholar

65. 

S. Agaian, B. Silver and K. Panetta, “Transform coefficient histogram-based image enhancement algorithms using contrast entropy,” IEEE Trans. Image Process., 16 741 –758 https://doi.org/10.1109/TIP.2006.888338 IIPRE4 1057-7149 (2007). Google Scholar

66. 

T. Celik, “Spatial mutual information and pagerank-based contrast enhancement and quality-aware relative contrast measure,” IEEE Trans. Image Process., 25 (10), 4719 –4728 https://doi.org/10.1109/TIP.2016.2599103 IIPRE4 1057-7149 (2016). Google Scholar

67. 

Z. Wang and A. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., 9 81 –84 https://doi.org/10.1109/97.995823 IESPEJ 1070-9908 (2002). Google Scholar

68. 

B. N. Anoop, P. E. Ameenudeen and J. Joseph, “A meta-analysis of contrast measures used for the performance evaluation of histogram equalization based image enhancement techniques,” in 9th Int. Conf. Comput., Commun. and Netw. Technol. (ICCCNT), 1 –6 (2018). https://doi.org/10.1109/ICCCNT.2018.8494069 Google Scholar

69. 

S. V. Renuka, D. R. Edla and J. Joseph, “An objective measure for assessing the quality of contrast enhancement on magnetic resonance images,” J. King Saud Univ. – Comput. Inf. Sci., 34 (10, Part B), 9732 –9744 https://doi.org/10.1016/j.jksuci.2021.12.005 (2021). Google Scholar

Biography

Daniel Wegner is a research assistant at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) in Ettlingen, Germany. He received his diploma degree in physics (equivalent to MS degree) from the Karlsruhe Institute of Technology (KIT), Germany, in 2013. Then, he received his PhD in physics from KIT in 2022. His research interests include image quality metrics, methods for contrast enhancement, image-based simulation of atmospheric turbulence as well as approaches for modeling, image-based simulation, and machine learning for imager performance assessment.

Stefan Keßler is the head of the Sensor Simulation Group at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation in Ettlingen, Germany. He received his diploma degree in physics from the University of Heidelberg in 2008 and his PhD in physics from the University of Erlangen-Nürnberg in 2014. His research activities comprise sensor modeling, image simulation of infrared and electro-optical imagers, and imager performance assessment.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Daniel Wegner and Stefan Keßler "Comparison of algorithms for contrast enhancement based on triangle orientation discrimination assessments by convolutional neural networks," Optical Engineering 62(4), 048103 (20 April 2023). https://doi.org/10.1117/1.OE.62.4.048103
Received: 8 December 2022; Accepted: 24 March 2023; Published: 20 April 2023
KEYWORDS: Signal to noise ratio, education and training, data modeling, detection and tracking algorithms, image enhancement, databases, RGB color model
