Anchor-free aerial object detectors have recently attracted considerable attention due to their high flexibility and computational efficiency. They are typically implemented by learning the two subtasks of object detection, object localization and classification, in two separate parallel branches of the detection head. However, without the constraints of predefined anchor boxes, anchor-free detectors are more vulnerable to spatial misalignment caused by optimization inconsistencies between these two subtasks, which significantly degrades detection performance. To address this issue, this paper proposes a novel and efficient anchor-free object detector, the localization-classification-aligned detector (LCA-Det), which explicitly aligns the predictions of localization and classification through a single-branch subtask-aligned detection head and a subtask-aligned sample assignment metric. Extensive experimental results demonstrate the effectiveness and superiority of the proposed method for object detection in aerial imagery.
Ocean fronts are one of the main means of material and energy transport in the ocean, and their detection has become a hot research topic in recent years. Although current techniques can accurately detect ocean fronts in remote sensing images, the detection results encompass all fronts present across the entire ocean region. Ocean fronts often exhibit complex and dynamic behavior, with multiple fronts overlapping and covering each other in space and time, which makes it challenging to isolate and independently analyze specific fronts of interest. To address this issue, this paper proposes an image segmentation algorithm for ocean fronts. This segmentation method contributes to the advancement of oceanographic studies and supports an improved understanding of the intricate dynamics of marine ecosystems.
Ocean front tracking has become of vital importance in ocean-related research in recent years, and many algorithms have been proposed to identify ocean fronts. However, these methods focus on single-frame ocean-front classification rather than ocean-front tracking. In this paper, we propose an ocean-front tracking dataset (OFTraD) and apply the GoogLeNet Inception network to track ocean fronts in video sequences. First, each video frame is split into image blocks; these blocks are then classified as ocean front or background by the GoogLeNet Inception network. Finally, the labeled image blocks are used to reconstruct the video sequence. Experiments show that our algorithm achieves accurate tracking results.
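The split-classify-reconstruct pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the real classifier is a GoogLeNet Inception network, which is stubbed here with a hypothetical `classify_block` gradient-energy placeholder, and frame dimensions are assumed to be multiples of the block size.

```python
import numpy as np

def split_into_blocks(frame, block_size):
    """Split a 2D frame into non-overlapping square blocks.
    Assumes frame dimensions are multiples of block_size."""
    h, w = frame.shape
    blocks = (frame
              .reshape(h // block_size, block_size, w // block_size, block_size)
              .swapaxes(1, 2))
    return blocks  # shape: (rows, cols, block_size, block_size)

def classify_block(block):
    """Stand-in for the GoogLeNet Inception classifier: labels a block as
    ocean front (1) or background (0). A simple gradient-energy threshold
    is used here purely as a placeholder."""
    gy, gx = np.gradient(block.astype(float))
    return int(np.hypot(gx, gy).mean() > 1.0)

def reconstruct_label_map(frame, block_size):
    """Classify every block and rebuild a per-block label map,
    mirroring the reconstruction step of the pipeline."""
    blocks = split_into_blocks(frame, block_size)
    rows, cols = blocks.shape[:2]
    labels = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            labels[i, j] = classify_block(blocks[i, j])
    return labels
```

Applying `reconstruct_label_map` to each frame of a sequence yields the per-frame front/background labeling from which the tracked video is reassembled.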
High-resolution ocean remote sensing images are of vital importance in ocean remote sensing research. However, the available ocean remote sensing images are composed of averaged data, whose resolution is lower than that of instantaneous remote sensing images. In this paper, we propose a novel model architecture based on the very deep super-resolution (VDSR) model that further enhances its performance for remote sensing image super-resolution. We target satellite-derived sea surface temperature (SST) images, a typical kind of ocean remote sensing image, as a specific case study. Furthermore, we evaluate the peak signal-to-noise ratio (PSNR) and perceptual loss of models trained on natural images and on SST frames. We designed and applied our model to the China Ocean SST database, the Ocean SST database, and the Ocean-Front databases, all containing remote sensing images captured by advanced very high resolution radiometers (AVHRR). Experimental results show that our model outperforms state-of-the-art models on SST frames.
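The PSNR metric used in the evaluation above is standard and can be computed as in this minimal numpy sketch (illustrative only, not the authors' evaluation code; `max_val=255.0` assumes 8-bit image data).

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB:
    PSNR = 10 * log10(max_val^2 / MSE).
    Returns inf when the images are identical (MSE = 0)."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the reference; for SST fields stored in physical units rather than 8-bit counts, `max_val` would be set to the data range instead.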
Ocean fronts have been a subject of study for many years, and a variety of methods and algorithms have been proposed for ocean front recognition. However, these existing methods are built upon human expertise, defining fronts through subjective thresholds on relevant physical variables. This paper proposes a deep learning approach that recognizes ocean fronts automatically. We first investigate four existing deep architectures, i.e., AlexNet, CaffeNet, GoogLeNet, and VGGNet, for the ocean front recognition task using remote sensing (RS) data. We then propose a deep network with fewer layers than these existing architectures, comprising a total of five learnable layers. In addition, we extend the proposed network to recognize fronts and classify them as strong or weak. We evaluate and analyze the proposed network with two strategies for exploiting the deep model: full training and fine-tuning. Experiments are conducted on three RS image datasets with different properties. Experimental results show that our model produces accurate recognition results.
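A five-learnable-layer network of the kind described above can be summarized by a layer specification and its parameter count, as in this sketch. The abstract gives only the layer count, so the kernel sizes, channel widths, and the 3-channel input below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical specification of a compact CNN with five learnable layers
# (three convolutional, two fully connected); all sizes are assumptions.
layers = [
    ("conv1", {"kind": "conv", "in": 3,          "out": 32,  "k": 5}),
    ("conv2", {"kind": "conv", "in": 32,         "out": 64,  "k": 3}),
    ("conv3", {"kind": "conv", "in": 64,         "out": 64,  "k": 3}),
    ("fc1",   {"kind": "fc",   "in": 64 * 6 * 6, "out": 128}),
    ("fc2",   {"kind": "fc",   "in": 128,        "out": 2}),  # front vs. background
]

def n_params(spec):
    """Weights + biases for a conv or fully connected layer."""
    if spec["kind"] == "conv":
        return spec["in"] * spec["out"] * spec["k"] ** 2 + spec["out"]
    return spec["in"] * spec["out"] + spec["out"]

total = sum(n_params(spec) for _, spec in layers)
```

Extending the network to the strong/weak variant amounts to widening the final layer from 2 outputs to 3 (background, weak front, strong front).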