Target recognition techniques based on deep learning have recently achieved impressive performance in the optical domain. These promising results have spurred research on deep learning for Synthetic Aperture Radar (SAR) target recognition. Most contemporary studies directly adopt or modify the deep learning architectures used for optical image target recognition. A primary limitation of this approach is the large amount of data required for training. However, collecting real SAR data is time-consuming and costly, and publicly accessible SAR target datasets remain scarce. As a remedy, studies have generated synthetic SAR data using CAD models and electromagnetic simulations. Yet a gap in recognition performance emerges due to domain differences between synthetic and real data, especially variations in speckle intensity and side-lobes. In this paper, we propose a novel domain randomization technique to mitigate these inter-domain disparities. Building on generative adversarial networks (GANs), we preserve the core characteristics of SAR targets while minimizing domain differences, applying random transformations to extraneous elements (e.g., clutter, speckle). Through this method, a single SAR image can be diversified into multiple variants, effectively augmenting the dataset and considerably enhancing the recognition performance and robustness of deep-learning-based target recognition models.
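The core augmentation idea can be illustrated with a minimal sketch that is not the paper's GAN pipeline: here, multiplicative gamma-distributed speckle with a randomly drawn equivalent number of looks is applied to a synthetic intensity patch, varying the speckle statistics while leaving the underlying target structure untouched. The function name `randomize_speckle` and the `looks_range` parameter are hypothetical, introduced only for illustration.

```python
import random

def randomize_speckle(patch, looks_range=(1.0, 8.0), rng=None):
    """Apply random multiplicative gamma speckle to a 2-D intensity patch.

    Toy stand-in for GAN-based domain randomization: the target structure
    is preserved while speckle statistics vary between augmented copies.
    `looks_range` bounds the randomly drawn equivalent number of looks
    (a hypothetical parameter, not from the paper).
    """
    rng = rng or random.Random()
    looks = rng.uniform(*looks_range)
    # Gamma speckle with unit mean: shape=looks, scale=1/looks
    return [[px * rng.gammavariate(looks, 1.0 / looks) for px in row]
            for row in patch]

# Diversify one synthetic patch into several augmented variants
rng = random.Random(0)
patch = [[1.0] * 8 for _ in range(8)]  # placeholder synthetic SAR patch
variants = [randomize_speckle(patch, rng=rng) for _ in range(4)]
```

Because the speckle is multiplicative with unit mean, each variant keeps roughly the same average intensity as the source patch, so a classifier trained on the variants sees the same target under differing speckle conditions.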