This PDF file contains the front matter associated with SPIE Proceedings Volume 13032, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
For multi-pass interferometric synthetic aperture radar applications, phase incoherencies between data collected during successive passes cause errors in the processed results. In topographic mapping, where data at several elevations are interferometrically processed to resolve a height map of the scene, phase incoherencies translate to height errors. For digital beamforming arrays, phase incoherencies result in poor beamforming. This includes high sidelobes, inaccurate steering angle, and poor spatial resolution. Therefore, to maximize sensing performance and ensure accurate measurements, it is necessary to phase calibrate these arrays. By treating each data collection location in the multi-pass configuration as an element of a digital beamforming array, similar calibration techniques can be used to, analogously, phase calibrate the synthetic aperture radar (SAR) data stack in interferometric SAR (IFSAR) processing. In this work, three data-driven phase calibration techniques are demonstrated on simulated and measured (Gotcha Volumetric SAR) data stacks, with the resulting height maps showing improved focusing of the scatterers in elevation.
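As an illustration of the element-wise calibration idea, the sketch below simulates a stack of passes with unknown per-pass phase offsets and removes them with a simple prominent-point technique. All parameters are invented for the example, and none of the paper's three specific techniques is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_passes, n_pix = 8, 64

# Ideal single-scatterer response: identical across passes.
ideal = np.zeros(n_pix, dtype=complex)
ideal[20] = 1.0

# Each pass picks up an unknown phase offset (the incoherence to calibrate out).
true_offsets = rng.uniform(-np.pi, np.pi, n_passes)
stack = ideal[None, :] * np.exp(1j * true_offsets)[:, None]
stack += 0.01 * (rng.standard_normal(stack.shape) + 1j * rng.standard_normal(stack.shape))

# Prominent-point calibration: use the phase of the brightest pixel in a
# reference pass to align every other pass to it.
ref = 0
peak = np.argmax(np.abs(stack[ref]))
est = np.angle(stack[:, peak] * np.conj(stack[ref, peak]))
calibrated = stack * np.exp(-1j * est)[:, None]

# After calibration the passes sum coherently at the scatterer.
coherent_gain = np.abs(calibrated[:, peak].sum()) / n_passes
print(round(coherent_gain, 2))  # close to 1.0
```

In the IFSAR setting, each row of `stack` plays the role of one collection pass (one "array element"), so the same alignment restores the phase coherence that interferometric height estimation depends on.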
The high sample density and rich feature set of 3D SAR make high-performance target recognition possible even in noisy environments. The recognition performance with 3D imaging is examined for full target templates and for feature decompositions of the target. Spherical SAR with different Fourier-based focusing across different frequencies provides 3D image volumes that can be further segmented to reveal component surfaces of the target. The sparse clustering of distinct target components and the high adjacent voxel density in these clusters combine to increase performance in 3D object understanding. Robust recognition depends on how well these templates and features perform across different noise levels. Target templates consist of multiple orthogonal views of the target. The components in each view can then be further segmented into points of high reflection (point scatterers), like-oriented surfaces, edge or surface clusters, or even different surface-roughness components. Recognition then consists of either a whole-target template or a feature-aggregated target model, both of which are similar to the noise-free image data but different enough from the image data of dissimilar targets. This approach leads to more heuristic recognition algorithms but also to a more detailed understanding of the target and its most distinct features. This may yield better target understanding, as well as an ability to identify any feature variations from the originally imaged target model. This approach, or alternative recognition approaches, may be automated to apply to more generic 3D imaged objects, but key to a more detailed understanding is the ability to determine which features led to the identification.
Image-to-image translation methods aim to convert an image from its native source domain to a target domain. This is a common technique when the target domain’s phenomenology is more amenable to a certain task than the source domain. An example of this practice is Synthetic Aperture Radar (SAR) to electro-optical (EO) translation for 3D reconstruction. Techniques in 3D reconstruction have been shown to be effective on EO imagery, so a common practice is to translate SAR imagery to the EO domain in order to form 3D reconstructions from SAR imagery. The translation algorithms ultimately map specular SAR responses to diffuse EO responses. While previous work supports the effectiveness of deep neural networks for such a translation, the black-box nature of the trained models does not offer explainability towards the effectiveness of the SAR to EO translations. This work aims to offer explainability for SAR to EO translations via direct comparison of facet responses found in ray-tracing-based simulations given equivalent target and sensor geometry. Further analysis of these target responses is conducted in order to understand scenarios where SAR to EO translation is expected to be effective or ineffective.
Reconstructing 3D data of objects from limited SAR imagery is of interest due to SAR's ability to actively sense targets from a far stand-off range. SAR imagery is non-literal and may not capture the same features as a passive EO camera. However, EO imagery has been shown to be a promising candidate for low-view 3D reconstruction. Thus, a common technique for SAR 3D reconstruction is to first translate a SAR image to an EO image. The structural similarity (SSIM) metric has been shown to be an effective loss function in the techniques used to translate SAR to EO. However, SSIM has several components that can be tuned to achieve optimal performance. This work addresses (i) the parameterization of SSIM for the SAR to EO translation problem and (ii) the ability to reconstruct 3D objects from SAR images after said translations. A parametric sweep is conducted to find an optimal parameterization on several matched SAR and EO datasets.
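A minimal sketch of the SSIM sweep idea follows, using a global single-window SSIM with the tunable stability constants k1 and k2; the constant values and images below are illustrative assumptions, not the paper's datasets or chosen parameterization.

```python
import numpy as np

def ssim_global(x, y, k1=0.01, k2=0.03, dyn_range=1.0):
    """Global (single-window) SSIM; k1 and k2 are the tunable stability constants."""
    c1, c2 = (k1 * dyn_range) ** 2, (k2 * dyn_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
eo = rng.random((32, 32))                              # stand-in EO chip
sar_like = eo + 0.1 * rng.standard_normal((32, 32))    # stand-in translated image

# Parametric sweep over the stability constants, in the spirit of the study.
for k1 in (0.01, 0.05):
    for k2 in (0.03, 0.1):
        print(k1, k2, round(ssim_global(eo, sar_like, k1, k2), 3))
```

In the translation setting, each (k1, k2) pair defines a different SSIM loss surface; the sweep ranks them by downstream translation quality rather than by the SSIM value itself.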
With the development of airborne synthetic aperture radar (SAR) measurement techniques, 3D SAR image formation has become prevalent in the SAR community. Conventional backprojection algorithms have difficulty mapping scatterers to a voxel grid in 3D space due to a myriad of differential ranges in the height dimension. The inaccurate mapping of range profiles is commonly seen in layover defects. To address this issue while using limited sensor aspects, this study utilizes tomographic techniques. Interferometric SAR is leveraged to yield height-estimate surfaces, which are applied to the 3D backprojection images as a spatial filter. Fusion across a swath of aspects in azimuth for a fixed elevation bin is utilized to resolve shadowed and non-resolved features in the surface reconstructions. Height estimates are applied to the 3D image grid corresponding to the range and cross-range voxels. Multiple height estimation algorithms are studied and yield results, on a feature-level basis of targets, accurate to within inches for X-band synthetically generated data.
The volumetric scattering of radar waves allows for three-dimensional SAR imaging of a scene. As in two dimensions, traditional Fourier methods are easier to implement but have limitations in quality with respect to speckle, scintillation, and sidelobe artifacts. Superresolution methods such as the Minimum Variance Method (MVM) and the Multiple Signal Classification (MUSIC) algorithm have been shown to produce high-quality images. However, these algorithms are computationally intensive, as they require the estimation of a correlation matrix, and manipulations thereof, as well as computing the image spectrum through evaluation of a quadratic form for each image pixel. This paper presents an efficient method for computing these superresolution techniques for 3D SAR images.
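The per-pixel quadratic form at the heart of MVM can be sketched as follows. This is a 1-D toy with assumed scene parameters that shows the cost structure the paper sets out to reduce (correlation matrix estimate, inverse, and one quadratic form per pixel); it is not the paper's efficient method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16            # aperture samples
snapshots = 200

# Two point scatterers at assumed normalized spatial frequencies.
f1, f2 = 0.1, 0.2
steer = lambda f: np.exp(2j * np.pi * f * np.arange(n))
data = (steer(f1)[:, None] * rng.standard_normal(snapshots)
        + steer(f2)[:, None] * rng.standard_normal(snapshots)
        + 0.1 * (rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots))))

# Estimate the correlation matrix, then evaluate the MVM quadratic form per pixel.
R = data @ data.conj().T / snapshots
Rinv = np.linalg.inv(R + 1e-6 * np.eye(n))
freqs = np.linspace(0, 0.5, 256)
spectrum = np.array([1.0 / np.real(steer(f).conj() @ Rinv @ steer(f)) for f in freqs])

f_peak = freqs[np.argmax(spectrum)]
print(round(f_peak, 3))  # near one of the scatterer frequencies
```

Even in this toy, the cost is dominated by the matrix estimate and the per-pixel quadratic forms; in 3D the pixel count and matrix size grow dramatically, which motivates the efficient computation the paper presents.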
The signatures of moving targets in synthetic aperture radar (SAR) imagery are typically smeared primarily in the radar cross-range direction. For such endoclutter targets, the time interval between adjacent radar pulses along the synthetic aperture is sufficiently small that the collection of the target signature does not experience aliasing. If a mobile target is moving sufficiently fast during the SAR collection interval, then the signature exhibits a smearing that lies along a diagonal in the SAR image space of down-range versus cross-range. In the present analysis, the properties of such diagonal signatures are investigated for constant velocity targets. The research further includes the development of new mathematics and algorithms that yield finely refocused SAR imagery for automatic detection and recognition, even for relatively fast exoclutter targets with non-uniform rotation. This work develops and applies two algorithms to detect and refocus fast-moving exoclutter surface targets within SAR imagery: (1) the Rapid Exoclutter Focus Transformation (REFT) algorithm to transform input SAR imagery into a form conducive for the detection of fast-moving exoclutter targets, and (2) the Arbitrary Rigid Object Motion Autofocus (AROMA) algorithm for automatic focus of moving targets with non-uniform rotation and translation.
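The first-order geometry behind the diagonal signature can be illustrated numerically. The parameters below are hypothetical, and the two relations used (cross-range displacement R·v_r/v_p and down-range walk v_r·T for a constant radial velocity) are the standard first-order ones, not the REFT or AROMA algorithms themselves.

```python
# Hypothetical collection geometry.
R = 10e3        # slant range to scene center, m
v_p = 100.0     # platform speed, m/s
T = 2.0         # coherent aperture time, s
v_r = 5.0       # target radial (range-rate) velocity, m/s

# A constant radial velocity displaces the signature in cross-range...
dx_cross = R * v_r / v_p
# ...while the range walk accumulated over the aperture extends it in
# down-range, so the combined signature lies along a diagonal.
dr_down = v_r * T

print(dx_cross, dr_down)  # 500.0 10.0
```

The ratio of these two extents sets the slope of the diagonal in the down-range versus cross-range image space, which is the property the analysis exploits for constant-velocity targets.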
Extensive work has been published on theoretical methods for ground moving target indication (GMTI) in synthetic aperture radar (SAR) images. The primary challenge to this problem is that SAR imaging inherently assumes a stationary scene in order to allow a long coherent processing interval, and thus moving targets that violate that assumption may be difficult to reliably detect. Recent work in this area has benefited from experiments with measured data sets that are of high quality but also include the measurement imperfections innate to any measured data. In many instances, sophisticated SAR-GMTI techniques have been brought to bear without necessarily employing well-known bootstrapping methods for data calibration and error correction. This leads to comparisons with algorithm baselines that are not reflective of the state of the art, and to performance analyses of new algorithms that may be inaccurate or even pessimistic due to the presence of unresolved measurement errors. In this paper, we show that straightforward methods of SAR data calibration allow high-quality SAR-GMTI images to be achieved from measured data via simplistic clutter cancellation and along-track interferometry. These simple steps may serve as the basis for the testing of more advanced algorithms.
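A toy version of the clutter-cancellation and along-track interferometry steps might look like the following. It is idealized by assumption: two perfectly co-registered and calibrated channels with one injected mover, so none of the measurement errors the paper addresses are modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)

# Stationary clutter is identical in the two along-track channels; a mover
# acquires an interferometric phase proportional to its radial velocity.
clutter = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
ch1 = clutter.copy()
ch2 = clutter.copy()
mover_phase = 1.2
ch1[32, 32] += 5.0
ch2[32, 32] += 5.0 * np.exp(1j * mover_phase)

# Simplistic clutter cancellation: subtracting the co-registered channels
# removes stationary clutter and leaves only the moving-target residue.
dpca = ch1 - ch2
loc = np.unravel_index(np.argmax(np.abs(dpca)), shape)

# Along-track interferometry: the interferometric phase tags the mover.
ati_phase = np.angle(ch1[loc] * np.conj(ch2[loc]))

print(loc)  # mover location
```

With measured data, the channel balance assumed above must first be restored by the calibration steps the paper describes; otherwise residual clutter leaks through the subtraction and masks slow movers.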
Superresolution methods of spectral estimation have shown much promise for creating high quality SAR images that are devoid of the inherent shortcomings of classical Fourier techniques. However, many modern spectral estimation techniques involve the estimation of a correlation matrix and manipulations thereof, both of which can be prohibitively expensive. While efficient methods of computing these spectral estimation techniques have been explored in the literature, for large scenes with high resolution, it is often still too expensive to compute and store the data necessary for the computations. These techniques thus have had limited application in the vast array of SAR imaging problems. This paper presents several methods for generating accurate approximations of these spectral estimation techniques while further improving the computational efficiency and reducing the storage burden.
We consider the two-pass coherent change detection problem for SAR imaging. Inspired by classical maximum likelihood-based coherent change detectors (Jakowatz, 1996) and multi-polarization SAR change detection techniques (Novak, 2005), we propose a method of incorporating underlying structural image information using specially formulated kernels. In particular, we utilize a class of convolutional edge detection kernels to extract underlying edge information in the scene of interest given noisy and potentially incomplete data. We then adapt existing multi-polarization SAR change detection methods to incorporate such edge information to improve the quality and robustness of resulting change maps. We validate the proposed method using real-world SAR images from the CCD Challenge Problem dataset and demonstrate improved change detection performance using empirical ROC studies.
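The pairing of a sample-coherence estimate with convolutional edge kernels can be sketched roughly as follows, on synthetic speckle with a boxcar coherence estimator. The edge-weighting rule at the end is purely an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def smooth(a, win=5):
    """Local boxcar average via FFT (circular boundary), kept complex."""
    k = np.zeros(a.shape)
    k[:win, :win] = 1.0 / win**2
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(k))

def coherence(f, g, win=5):
    """Sample coherence magnitude between two co-registered complex images."""
    num = np.abs(smooth(f * np.conj(g), win))
    den = np.sqrt(smooth(np.abs(f)**2, win).real * smooth(np.abs(g)**2, win).real)
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(4)
shape = (64, 64)
ref = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
rpt = ref.copy()
rpt[20:30, 20:30] = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))  # change

gamma = coherence(ref, rpt)
sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
edges = np.abs(np.fft.ifft2(np.fft.fft2(np.abs(ref)) * np.fft.fft2(sobel, shape)))
# Illustrative edge-weighted change statistic: emphasize coherence loss while
# de-emphasizing high-gradient pixels where misregistration mimics change.
change = (1.0 - gamma) / (1.0 + edges / edges.mean())

print(round(change[22:28, 22:28].mean(), 3), round(change[45:55, 45:55].mean(), 3))
```

The changed patch shows a much larger statistic than the unchanged background; the edge term suppresses spurious responses along strong scene structure, which is the robustness mechanism the ROC studies quantify.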
In this paper, we describe a new approach to non-coherent change detection for high resolution polarimetric synthetic aperture radar (polSAR) exploitation. In the high resolution setting, the reduced size of a resolution cell diminishes the applicability of central limit theorem arguments that lead to the traditional Gaussian backscatter models that underpin existing polSAR change detection algorithms. To mitigate this, we introduce a new model for polSAR data that combines generalized Gamma (GΓ) distributed marginals within a copula framework to capture the correlation dependency between multiple polSAR channels. Using the GΓ-copula model, a generalized likelihood ratio test (GLRT) is derived for detecting changes within high resolution polSAR imagery. Examples using measured data demonstrate the non-Gaussian nature of high resolution polSAR data and quantify a performance improvement when using the proposed GΓ-copula change detection framework.
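A single-channel sketch of the likelihood-ratio idea using the generalized Gamma density follows. There is no copula coupling here, and the parameter values are assumed rather than fitted, so this only illustrates why separate GΓ fits win when the two looks genuinely differ.

```python
import numpy as np
from math import gamma

def ggamma_pdf(x, a, d, p):
    """Generalized Gamma pdf; Gamma (p=1) and Weibull (d=p) are special cases."""
    return (p / (a**d * gamma(d / p))) * x**(d - 1) * np.exp(-(x / a)**p)

def loglik(x, a, d, p):
    return np.sum(np.log(ggamma_pdf(x, a, d, p)))

rng = np.random.default_rng(10)
x1 = rng.gamma(2.0, 1.0, 500)   # reference-pass amplitudes
x2 = rng.gamma(2.0, 3.0, 500)   # test pass: changed texture/scale

# GLRT-style comparison: joint likelihood under "no change" (shared parameters)
# versus "change" (separate parameters). The values below are assumed stand-ins
# for the maximum-likelihood fits a real pipeline would compute.
ll_nochange = loglik(np.concatenate([x1, x2]), 2.0, 2.0, 1.0)
ll_change = loglik(x1, 1.0, 2.0, 1.0) + loglik(x2, 3.0, 2.0, 1.0)
glrt = ll_change - ll_nochange

print(glrt > 0)  # the "change" hypothesis fits better
```

The paper's contribution extends this one-dimensional picture by tying the GΓ marginals of multiple polarimetric channels together through a copula, so the GLRT also exploits cross-channel correlation.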
Synthetic data is commonly used to assess the performance of Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems by modeling the operating condition (OC) space in question. In this work we demonstrate that the use of an informed sampling technique, compared to an uninformed sampling approach, can efficiently assess the “OC gap” between train and test OC spaces as the gap narrows. To demonstrate the effectiveness of an informed sampling approach, SAR ATR experiments are conducted as a function of how representative the training distribution of OCs is compared to the test OC space, given a variety of challenging OC scenarios. Algorithm performance is assessed over a series of experiments given discrepancies between azimuth and depression angle of the sensor.
Automatic target recognition (ATR) performance models (PMs) help engineers and scientists understand the effectiveness of their algorithms as a function of target, environment, and sensor conditions (also called operating conditions or OCs). Traditional approaches typically leverage handcrafted models and rely on subject matter expertise to model the OC-performance relationship using limited amounts of real-world OC data. Recent advances in synthetic data modeling have led to improved access to high-quality synthetic sensor data. This motivates the consideration of more data-driven PM approaches. In this work, we adopt a probabilistic classification-based framework for ATR performance modeling and explore the use of generic classifiers to predict ATR performance using ATR experiments performed with synthetic data. We leverage results from prior SAR ATR studies to examine the accuracy and calibration performance for regularized logistic regression, multilayer perceptrons, random forests, and Gaussian process classifiers in the performance modeling context. We also use our experiments to make observations regarding the use of these classifiers for performance modeling based on their unique characteristics.
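As a minimal example of a data-driven performance model, the sketch below fits a regularized logistic regression from invented OC features (depression angle, azimuth offset) to per-experiment ATR correctness labels; every number here is hypothetical and the degradation law is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Hypothetical OC features: [depression angle (deg), azimuth offset (deg)].
oc = np.column_stack([rng.uniform(10, 45, n), rng.uniform(0, 90, n)])
# Fabricated ground truth: accuracy degrades with azimuth offset from training.
p_correct = 1.0 / (1.0 + np.exp(-(2.0 - 0.05 * oc[:, 1])))
y = (rng.random(n) < p_correct).astype(float)

# Regularized logistic regression as the performance model, fit by gradient descent.
mu, sd = oc.mean(0), oc.std(0)
X = np.column_stack([np.ones(n), (oc - mu) / sd])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y) / n + 1e-3 * w)

# Predicted probability of a correct ATR declaration at a new operating condition.
x_new = np.array([1.0, *((np.array([30.0, 10.0]) - mu) / sd)])
pred = 1.0 / (1.0 + np.exp(-x_new @ w))
print(round(pred, 2))
```

A calibrated PM of this form answers the question the paper poses: given an OC the system has never been tested at, what probability of correct declaration should be reported, and how well does that probability match observed frequencies.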
Due to the differences in the statistical distributions of synthetic versus measured synthetic aperture radar (SAR) images, it is difficult to train a deep learning model on synthetic images to accurately classify measured images. This research leverages foundational models, which require enormous computing power to train, and approaches the problem from a transfer learning perspective. Since foundational models have been trained on tens of billions of images, they have feature extraction capabilities far beyond what is possible under standard computational restrictions and with greatly reduced data availability. Therefore, we utilize the foundational model’s feature extraction capabilities and transfer them to the synthetic-measured gap problem. The hypothesis is that the very rich features resulting from the foundational models trained almost exclusively on EO images can be transferred to the SAR classification problem using synthetic SAR data for training while minimizing the need for measured SAR data.
Multi-sensor fusion algorithms combine information from different sensors to exceed the performance of a single sensor for a given task. In this work, we focus on fusing imagery from electro-optical (EO) and synthetic aperture radar (SAR) sensors for target identification. In addition to the imagery itself, large amounts of metadata or “side information” may be available as well. This data can include important characteristics about the operating conditions (OCs) under which the images were taken. On its own, this metadata is not useful for target identification. However, this extra information can potentially be leveraged to learn better representations of EO and SAR images and improve classification performance. In this work, we assume that side information is available only during training and leverage this information to build contextual deep representations of the target classes. At test time, we fuse the EO and SAR representations to classify the input images without accessing the metadata. We examine the impact of these OC-aware target representations on fusion performance under various forms of OC mismatch between training and testing and show that fusing models trained with side information improves classification accuracy when compared to classifiers trained without side information, especially under more significant train/test OC shifts. We also observe that the inclusion of side information may reduce the trained network’s capacity, which implies that side information introduces a regularizing effect. To further study this effect, we empirically compare our approach to classifiers trained with weight decay and bottleneck layers and find that our approach achieves higher accuracy, implying that the inclusion of side information has additional impacts on the learned representations beyond simple regularization.
Given the limited amount of measured synthetic aperture radar data available to train object recognition algorithms, synthetic data is used to train the algorithm while measured data is used for testing. To account for the variability of measured data and to ensure robustness to various conditions, extensive physics-based augmentations are used during the training process. These augmentations include target, background, and sensor variability. In order to explore the augmentation space most efficiently, the background and sensor variability are explored online during the training process using an adversarial learning strategy. Performance trades are reported as a function of the various augmentation strategies.
In recent times, target recognition techniques based on deep learning in the optical domain have exhibited impressive performance. Because of these promising results, there has been a surge in research centered around deep learning in the field of Synthetic Aperture Radar (SAR) target recognition. Most contemporary studies directly adopt or modify the deep learning model structures used in optical image target recognition. A primary limitation of this approach is the large amount of data required for training. However, collecting real SAR data entails significant time and cost, and the availability of publicly accessible SAR target datasets is also insufficient. As a solution, studies have been undertaken to generate synthetic SAR data using CAD models and electromagnetic simulations. Yet, a discrepancy in recognition performance emerges due to domain differences between synthetic and real data, especially variations in speckle intensity and sidelobes. In this paper, we propose a novel domain randomization technique to mitigate these inter-domain disparities. Using adversarial generative networks as a foundation, we preserve the core characteristics of SAR targets while minimizing domain differences, applying random transformations to extraneous elements (e.g., clutter and speckle). Through this method, we can diversify a single SAR image into varied data, effectively augmenting the dataset. This considerably enhances the recognition performance and robustness of deep learning-based target recognition models.
This paper explores the use of colorization as a data augmentation and its applications in bridging the synthetic-measured gap. A current problem in Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) is training deep learning networks on largely synthetic data and transferring the knowledge to the measured domain. Data augmentations, such as colorization, can make the deep learning models more robust to the shift in domain when used during training, leading to improved performance over traditional synthetic data. Our approach utilizes a lossless colorization augmentation and applies it to various ResNet-based architectures to improve the SAR ATR performance when trained on limited measured data.
A typical assumption for deploying machine learning models is that the model training and inference data were drawn from the same distribution. However, this assumption rarely holds true for systems deployed in the open world. Inference data can drift over time for numerous reasons, such as changes in operating conditions, adversarial modifications to targets, or sensor degradation. Despite these changes, deep learning models are especially vulnerable to issuing over-confident predictions on out-of-distribution data. This work seeks to address this issue by proposing a framework for describing out-of-distribution detection pipelines, proposing an out-of-distribution detection algorithm using Gaussian Mixture Models which is well suited for SAR ATR, and by evaluating multiple pipelines which exploit the intermediate states of ATR model deep neural networks. This work studies candidate pipelines with varied amounts of dimensionality reduction and detection algorithms on the SAMPLE+ dataset challenge problems for clutter and confuser rejection. Despite the exclusion of out-of-distribution samples from pipeline training, the presented results demonstrate that these samples can nonetheless be reliably detected, exceeding baseline performance by more than 10 percentage points.
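The density-based rejection idea can be sketched with a single Gaussian component, a one-component stand-in for the paper's Gaussian Mixture Model, applied to invented features in place of real intermediate ATR network states.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 8

# Stand-in for intermediate ATR features: in-distribution targets cluster
# tightly; clutter and confusers land elsewhere in feature space.
in_feats = rng.standard_normal((1000, d))
ood_feats = rng.standard_normal((200, d)) + 4.0  # shifted cluster

# Fit one Gaussian to the in-distribution features only; OOD samples are
# deliberately excluded from training, as in the paper's pipelines.
mu = in_feats.mean(0)
cov = np.cov(in_feats, rowvar=False) + 1e-6 * np.eye(d)
prec = np.linalg.inv(cov)

def maha(x):
    """Squared Mahalanobis distance to the in-distribution fit."""
    diff = x - mu
    return np.einsum('ij,jk,ik->i', diff, prec, diff)

# Threshold at a high percentile of in-distribution scores, then reject
# anything scoring beyond it as out-of-distribution.
thresh = np.percentile(maha(in_feats), 95)
detect_rate = (maha(ood_feats) > thresh).mean()
print(round(detect_rate, 2))  # most OOD samples rejected
```

A full mixture replaces the single component when the in-distribution features form multiple clusters (one per target class, for example), but the train-only-on-in-distribution rejection logic is the same.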
In recent studies, Graph Neural Networks (GNNs) have been shown to be vulnerable to various adversarial attacks in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). Due to their important role in military applications, the vulnerability of GNN models raises severe security concerns. In this work, we propose a Graph Contrastive Learning based Adversarial Training method for SAR image classification. By training the model with adversarial samples generated by a Projected Gradient Descent attack during the contrastive learning step, we demonstrate that our model can smooth the representation space and suppress the distortion caused by adversarial attacks, thus making better predictions. By formulating the problem as a multi-objective optimization task, our model achieves 98.1% accuracy on clean samples and 85.2% accuracy on adversarial samples, both outperforming state-of-the-art models on the MSTAR dataset.
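The Projected Gradient Descent attack used to generate the adversarial training samples can be sketched on a toy linear classifier standing in for the GNN; all parameters below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear classifier standing in for the trained network.
w = rng.standard_normal(16)
x = rng.standard_normal(16)
y = 1.0 if w @ x > 0 else 0.0   # clean label

def loss_grad(x):
    """Gradient of the logistic loss with respect to the input (not the weights)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

# PGD: take signed gradient-ascent steps on the loss, projecting back into an
# epsilon-ball around the clean sample after every step.
eps, alpha, steps = 0.3, 0.05, 20
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
    x_adv = np.clip(x_adv, x - eps, x + eps)

print(abs(x_adv - x).max() <= eps + 1e-12)  # perturbation stays in the ball
```

In adversarial training, each clean sample is paired with its `x_adv` during the contrastive step so the learned representation is pushed to agree on both, which is the smoothing effect the paper reports.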
When performing automatic target recognition it is common to train models using synthetically generated data, because synthetically generated data is plentiful and cheap to produce. Once trained on synthetic data, machine learning models are often tested on measured, real-world SAR images, and they do not perform as well when analyzing measured SAR images. This problem is known as the synthetic-measured gap. In this work we explore training generative and contrastive models to close this gap. We train our models on synthetically generated data with the goal of being able to classify measured SAR images, utilizing segmentation masks as well as fully-formed SAR images. In the generative approach we explore using an auto-encoder to generate segmentation masks of input SAR images. The auto-encoder's architecture includes a classifier which is trained using shared features between the raw image and the segmentation mask; this model is capable of generating a segmentation mask from a SAR image. The contrastive approach uses the SimSiam architecture, which utilizes segmentation masks and SAR images, and makes a classification decision by learning features that are shared between the two input types. The goal of this work is to improve classification performance when training on synthetic data and evaluating on measured data.
Conventional synthetic aperture radar processes all of the available k-space data to form a single 2-D image. While this does yield the finest resolution image possible, it implicitly assumes that imaged scatterers lie within the chosen image plane and that their responses are isotropic over the observed k-space. These assumptions neglect out-of-plane height, which can lead to pixel phase and layover variation over the extent of an aperture, and other anisotropic scattering behaviors expected of non-point responses. The averaging process of image formation may therefore be destroying or obscuring data richness that is not easily recovered in later processing. In this paper, we show that subaperture processing of SAR data permits anisotropic scattering behavior, such as out-of-plane height, to be implicitly encoded in color channels, and through a few suggested approaches, we seek to improve image interpretability for humans and machine learning.
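One simple form of the subaperture-to-color idea can be sketched as follows, under the assumptions that the phase history is already motion-compensated, that azimuth is the last axis, and that a plain 2-D inverse FFT stands in for image formation. The function name and three-way split are illustrative, not the paper's specific approaches.

```python
# Illustrative sketch: split the azimuth aperture into three subapertures,
# form a coarser-resolution image from each, and map the magnitudes to
# R/G/B so aspect-dependent (anisotropic) scattering shows up as color.
import numpy as np

def subaperture_rgb(phase_history, n_sub=3):
    n_az = phase_history.shape[-1]
    edges = np.linspace(0, n_az, n_sub + 1, dtype=int)
    channels = []
    for i in range(n_sub):
        sub = np.zeros_like(phase_history)
        sub[..., edges[i]:edges[i + 1]] = phase_history[..., edges[i]:edges[i + 1]]
        img = np.fft.fftshift(np.fft.ifft2(sub))   # stand-in image formation
        channels.append(np.abs(img))
    rgb = np.stack(channels, axis=-1)
    return rgb / rgb.max()                          # normalize for display
```

An isotropic point scatterer appears with equal energy in all three channels (gray), while aspect-dependent returns tint toward the subaperture in which they are strongest.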
Synthetic aperture radar (SAR) images can have broad dynamic range depending on the content of the observed scenes, varying from intense specular responses off manmade objects to low/no-return areas due to shadowing or forward scatter off electromagnetically smooth surfaces. To display SAR images for human consumption, or to input them to modern recognition algorithms, one must usually adopt a finite-precision representation. This is typically no more than 8 bits of quantization, but in some instances, using fewer bits may present operational advantages. This paper explores three distinct techniques for SAR image quantization and provides examples with measured imagery.
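As context, a minimal sketch of one common SAR display quantization (not necessarily one of the three techniques the paper evaluates): clip the log-magnitude to a fixed dynamic range below the image peak, then quantize the result to 2^n levels. The 60 dB range is an illustrative assumption.

```python
# Log-magnitude clip-and-quantize: complex SAR image -> n_bits integer image.
import numpy as np

def quantize_sar(complex_image, n_bits=8, dynamic_range_db=60.0):
    mag_db = 20.0 * np.log10(np.abs(complex_image) + 1e-12)  # avoid log(0)
    top = mag_db.max()
    floor = top - dynamic_range_db
    mag_db = np.clip(mag_db, floor, top)        # clip below the noise floor
    scaled = (mag_db - floor) / dynamic_range_db  # map to [0, 1]
    levels = 2 ** n_bits - 1
    return np.round(scaled * levels).astype(np.uint16)
```

With `n_bits=8` the peak maps to 255 and everything at or below 60 dB down maps to 0, compressing the specular-to-shadow range into the display precision.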
Deep learning is a technology that has proven extremely effective at addressing difficult problems in the field of computer vision. Much of the recent research in Automatic Target Recognition has leveraged deep learning classifiers, which perform well in closed set problems but lack robustness in open set problems. Moreover, deep learning classifiers generally lack confidence estimates that accurately reflect their performance. This paper demonstrates recent research on calibrating confidence measures of deep learners on both closed and open set problems in a transfer learning setting where only a small amount of measured data is used during training and calibration. Furthermore, the calibrated confidences are used to generate statistically rigorous prediction sets, which include the true target at a user-defined error rate.
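A hedged sketch of the split-conformal procedure that underlies "statistically rigorous prediction sets": a nonconformity-score threshold is calibrated on held-out labeled data so that, with probability at least 1 - alpha, the resulting set contains the true label. The softmax probabilities below are synthetic placeholders, and this is the generic recipe rather than the paper's specific calibration method.

```python
# Split conformal prediction for classification, using
# score(x, y) = 1 - softmax probability of class y.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate the score threshold on a held-out set."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile: the ceil((n+1)(1-alpha))-th
    # smallest calibration score.
    idx = int(np.ceil((n + 1) * (1 - alpha))) - 1
    return np.sort(scores)[min(idx, n - 1)]

def prediction_set(probs, threshold):
    """All classes whose score falls at or below the threshold."""
    return np.where(1.0 - probs <= threshold)[0]
```

On open-set problems, an empty or very large prediction set itself becomes a useful signal that the input may lie outside the trained classes.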
Because of limitations in the availability of synthetic aperture radar (SAR) training data, automatic target recognition (ATR) researchers have turned to the use of synthetic SAR images. Unfortunately, training neural network classifiers on this synthetic data does not yield robust models. Assuming access to limited measured SAR data, we evaluate two natural transfer-learning approaches to this problem, showing that neither successfully leads to a solution. Motivated by the successes of contrastive, representation, and metric learning, we propose a novel graph-based pretraining approach to transfer knowledge from synthetic samples to real-world scenarios. We show that this approach is applicable to three different neural network architectures, obtaining improvements over the baseline approach of 19.21%, 28.70%, and 8.27%, respectively. We also demonstrate that our method is robust to the choice of hyperparameters.