With the continuous development of modern science and technology, aerospace, aviation, and image-sensor technologies are steadily improving, and new sensors and remote sensing platforms keep emerging. The capacity to acquire remote sensing data is growing accordingly, and remote sensing data products now span multiple spatial resolutions and multiple payloads. Applications based on multi-source data fusion are expected to replace single-source data applications in the future. A prerequisite for fusing data from different sources is that their spatial references be consistent. It is therefore essential to study high-precision automatic registration of multi-source images, whose differing imaging mechanisms and effects make registration difficult. Taking the registration of optical and SAR images as an example, this paper presents a registration method based on bidirectional style transfer and a hybrid feature descriptor. First, a bidirectional style-transfer network converts the original optical and SAR images into pseudo-SAR and pseudo-optical images, respectively, thereby homogenizing the two modalities. Then, feature sets and hybrid feature descriptors are extracted from the optical and SAR images, and feature correspondences between the two modalities are established to achieve high-precision automatic registration. Experimental results show that the proposed method outperforms traditional image registration methods in both subjective perception and objective indicators such as RMSE, achieving better registration results.
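The style-transfer network and hybrid descriptor are specific to the paper, but the final registration step and its RMSE indicator are standard. The following minimal numpy sketch (with illustrative function names, not the authors' code) shows how an affine transform is fitted by least squares to matched feature points and how registration RMSE is then measured over those points:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src (N, 2) to dst (N, 2).

    Returns a 2x3 matrix A such that dst is approximately [src, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

def registration_rmse(src, dst, A):
    """RMSE between transformed source points and reference points."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])
    pred = X @ A.T
    return float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))

# Synthetic check: matches related by a known affine map should give ~0 RMSE.
rng = np.random.default_rng(0)
src = rng.uniform(0, 512, size=(20, 2))
true_A = np.array([[0.98, 0.05, 3.0],
                   [-0.04, 1.01, -2.0]])
dst = np.hstack([src, np.ones((20, 1))]) @ true_A.T
A = estimate_affine(src, dst)
print(registration_rmse(src, dst, A))  # close to 0 on noise-free matches
```

In practice the matched point pairs would come from the hybrid descriptors extracted from the homogenized image pair, and RMSE would be computed on held-out check points rather than the fitting points.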
Detecting small objects with oriented bounding boxes in aerial images is a current hot topic. However, because aerial images are not collected at the same height, the Ground Sample Distance (GSD) differs from image to image, so small objects are easily overlooked. Existing algorithms are designed for multi-scale object detection, and their feature fusion is time-consuming, producing a large number of model parameters that are hard to deploy on embedded devices. We propose three methods to address these problems. First, we rescale the collected aerial images to a common scale according to their GSD values. Second, we modify the structure of the Feature Pyramid Network (FPN), keeping only the necessary low-level feature maps. Finally, we rescale the anchors for the specific scene. We validate the proposed method on the DOTA dataset. The results show that the modified model identifies more small-scale objects; at the same detection accuracy as the original algorithm, the number of model parameters can be reduced by up to 2.7%, inference speed can be increased by 13.24%, and model size can be reduced by up to 28%.
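The first step, normalizing every image to a common GSD, reduces to computing a per-image resampling factor. A minimal sketch of that arithmetic (the function names and the 0.25 m target value are illustrative assumptions, not from the paper):

```python
def gsd_scale_factor(image_gsd_m, target_gsd_m):
    """Scale factor that resamples an image to the target GSD.

    A larger GSD means each pixel covers more ground, so an image whose
    GSD exceeds the target must be upsampled (factor > 1), and vice versa.
    """
    return image_gsd_m / target_gsd_m

def rescaled_size(width, height, image_gsd_m, target_gsd_m):
    """Pixel dimensions after resampling to the target GSD."""
    s = gsd_scale_factor(image_gsd_m, target_gsd_m)
    return round(width * s), round(height * s)

# A 1024x1024 tile captured at 0.5 m GSD, normalized to a 0.25 m target,
# becomes 2048x2048, so small objects keep a consistent pixel footprint.
print(rescaled_size(1024, 1024, 0.5, 0.25))  # (2048, 2048)
```

After this normalization, objects of the same physical size occupy roughly the same number of pixels in every image, which is what lets the pruned FPN keep only the low-level feature maps.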
Surface defects of industrial materials can seriously affect product quality, so industry demands high-precision defect detection algorithms. To address this need, we investigate methods for improving defect detection accuracy and for collaborative training. This paper first proposes a multi-attention fusion mechanism (MAF), which integrates the channel and spatial dimensions and embeds a spatial pyramid structure into the attention module; it alleviates the problem of inconspicuous defect features and strengthens feature extraction. Second, we propose the mixForm data augmentation algorithm, which transforms target defects in space and shape to tackle the few-sample problem, simultaneously improving the detection model's ability to recognize multiple defect types and small objects. Third, a split federated learning (SFL) framework enables collaborative training of industrial surface defect detection models at low resource cost. Our scheme improves model training efficiency and achieves high-accuracy detection from small numbers of defect samples. Experimental results show that MAF combined with mixForm achieves 82.91 mAP on the NEU-DET dataset, and that with MAF the defect detection algorithm gains at least 1.89 mAP over other attention mechanisms. The experiments also demonstrate that SFL converges faster and achieves higher detection performance than traditional federated learning approaches.
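The abstract does not give the exact MAF architecture, but the general idea of fusing channel and spatial attention can be sketched. The numpy toy below is a simplified stand-in (no learned weights, no spatial pyramid) intended only to show the data flow of channel weighting, spatial weighting, and a residual connection:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Per-channel weights from global average pooling (squeeze step).

    x: feature map of shape (C, H, W). A real module would pass the
    pooled vector through a small learned MLP; here the pooled values
    feed the gate directly, just to illustrate the flow.
    """
    pooled = x.mean(axis=(1, 2))           # (C,)
    w = sigmoid(pooled)                    # (C,) gate in (0, 1)
    return x * w[:, None, None]

def spatial_attention(x):
    """Per-location weights from channel-wise mean and max maps."""
    avg_map = x.mean(axis=0)               # (H, W)
    max_map = x.max(axis=0)                # (H, W)
    w = sigmoid(avg_map + max_map)         # stand-in for a learned conv
    return x * w[None, :, :]

def fused_attention(x):
    """Channel attention, then spatial attention, plus a residual
    connection so weak (inconspicuous) defect features survive."""
    return x + spatial_attention(channel_attention(x))

x = np.random.default_rng(1).normal(size=(8, 16, 16))
print(fused_attention(x).shape)  # (8, 16, 16)
```

Attention modules of this shape are drop-in: the output has the same (C, H, W) shape as the input, which is why they can be embedded into an existing backbone without restructuring it.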
An increasing number of applications require land cover information from remote sensing images, resulting in an urgent demand for automatic land use and land cover classification. Effectively improving the accuracy of land cover classification is therefore a main objective in remote sensing image processing. We propose a land cover classification postprocessing framework based on iterative self-adaptive superpixel segmentation (LCPP-ISSS) for remote sensing image data. This framework further optimizes the land cover classification results produced by neural networks without changing the network structure. First, we propose an iterative self-adaptive superpixel segmentation algorithm for high-resolution remote sensing images that extracts the boundary information of the different land cover classes. Then, we propose an optimization method based on patch complexity that refines the classification result by combining this boundary information with the semantic information. In experiments, we compare classification accuracy before and after applying LCPP-ISSS, as well as against other common methods. The results show that LCPP-ISSS outperforms the dense conditional random field, providing a 4% increase in mean intersection over union and a 10% increase in overall accuracy.
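The core of superpixel-based postprocessing is snapping the network's per-pixel labels to segment boundaries. The sketch below is a simplified stand-in for LCPP-ISSS: it applies a plain majority vote within each superpixel, omitting the paper's patch-complexity weighting, and the function name is illustrative:

```python
import numpy as np

def superpixel_majority_vote(labels, superpixels):
    """Reassign each superpixel to the most frequent class inside it.

    labels:      (H, W) integer class map predicted by the network.
    superpixels: (H, W) integer segment ids from any superpixel method.
    """
    out = labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        votes = np.bincount(labels[mask])  # class histogram in segment
        out[mask] = votes.argmax()         # majority class wins
    return out

# Toy example: a 4x4 label map split into two vertical superpixels;
# stray pixels are snapped to the majority class of their segment.
labels = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 1, 0],
                   [0, 0, 1, 1]])
superpixels = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
print(superpixel_majority_vote(labels, superpixels))
```

Because superpixels adhere to image edges, this kind of vote cleans up speckled misclassifications inside homogeneous regions while leaving class boundaries aligned with real land cover boundaries.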