With the increasing application of unmanned aerial vehicles (UAVs) across many industries, the demand for autonomous UAV navigation has become increasingly urgent, especially in areas where the GPS signal is absent or jammed. This paper proposes an integrated navigation method combining visual navigation and inertial navigation. First, a satellite map serves as the reference map. During flight, the UAV's camera photographs the ground at intervals, and each photo is matched against the reference map; in this way the visual navigation system localizes the UAV and evaluates the reliability of the localization. Finally, a particle filter is introduced to fuse the positioning results. To expedite the matching process, the INS is used to narrow down the region of the satellite map to be searched. Considering the differences between the camera photos and the reference map, this paper adopts the SuperPoint and SuperGlue algorithms for feature extraction and matching, respectively. Both algorithms use deep neural networks, enabling the extraction of deep semantic features rather than hand-crafted ones. Experimental results demonstrate the superior matching effectiveness of the image matching algorithm introduced in this paper, and simulation results show that the cumulative error of the INS is greatly reduced after fusion with the visual navigation results. Because it navigates autonomously, the integrated navigation method offers robust anti-interference capability, high autonomy, and better adaptability.
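The fusion step described above can be sketched as a standard particle filter cycle: propagate particles by the INS displacement, weight them against the visual fix when one is available and judged reliable, and resample. This is a minimal 2-D sketch under assumed Gaussian noise models; the function name, noise parameters, and state layout are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, ins_delta, visual_fix=None,
                         motion_noise=1.0, meas_sigma=5.0):
    """One predict/update/resample cycle fusing INS with a visual fix.

    particles : (N, 2) array of candidate UAV positions (x, y), metres.
    ins_delta : displacement reported by the INS since the last step.
    visual_fix: optional (x, y) position from image matching, or None
                when the match was judged unreliable.
    """
    # Predict: propagate every particle by the INS displacement plus noise.
    particles = particles + ins_delta + rng.normal(0.0, motion_noise,
                                                   particles.shape)
    if visual_fix is not None:
        # Update: weight particles by a Gaussian likelihood of the fix.
        d2 = np.sum((particles - visual_fix) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_sigma ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

Run repeatedly with a biased INS displacement and periodic visual fixes, the weighted-mean estimate stays near the true track even as dead reckoning alone would drift, which is the cumulative-error reduction the abstract reports.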
Fire detection is an important measure for protecting public safety and avoiding casualties and property damage. With the rapid development of artificial-intelligence algorithms, deep learning methods based on convolutional neural networks have been applied to fire detection. Although deep-learning-based fire detection algorithms have made significant progress, problems remain, such as insufficient and imbalanced data, low model accuracy, and inadequate real-time performance. Therefore, we design a lightweight fire detection algorithm based on deep learning, built on the object detection algorithm YOLOv5. First, a data augmentation algorithm is used to address the dataset problem; next, the algorithm's backbone network is made lighter to improve detection speed; finally, the feature fusion in the algorithm's neck is improved to raise accuracy. The experimental results show that the above method slightly improves accuracy while reducing model weight, and can be better applied to real fire detection scenarios.
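The abstract does not name the specific augmentation used. Mosaic stitching, which is standard in YOLOv5 training and well suited to small, imbalanced datasets, is one plausible instance; the sketch below is an assumption for illustration, not the paper's method, and real pipelines would also resize inputs and remap bounding boxes.

```python
import numpy as np

def mosaic4(images, out_size=640, seed=0):
    """Stitch four training images into one mosaic sample.

    Four images are placed around a random centre point, so the
    detector sees varied scales and partial contexts - a cheap way
    to enlarge a small fire dataset.
    """
    rng = np.random.default_rng(seed)
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    # A random split point keeps the four quadrants unequal in size.
    cx = int(rng.uniform(0.25, 0.75) * out_size)
    cy = int(rng.uniform(0.25, 0.75) * out_size)
    corners = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x1, y1, x2, y2) in zip(images, corners):
        h, w = y2 - y1, x2 - x1
        # Crop the top-left h*w patch of each source image.
        canvas[y1:y2, x1:x2] = img[:h, :w]
    return canvas
```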
Most edge devices, such as ASICs, FPGAs, and other embedded systems, have restricted computational resources, which poses an efficiency problem for running neural network models on these hardware platforms. Model quantization is an effective optimization technique for convolutional-layer inference of neural networks, at the cost of little accuracy loss. However, most quantization methods accelerate only the computation of the convolutional layers; the other layers of a model are still inferred in floating point, and the FPGA is not a suitable platform for floating-point calculation. In this paper, a fully quantized method is proposed for neural network inference on the FPGA platform: all calculations in model inference are performed on quantized values. More quantization generally leads to more accuracy loss, so, to preserve accuracy, several techniques are applied to the different functional layers of the model; for example, the activation layer uses bitwise operations instead of multiplication, and the concatenation layer uses separate quantization parameters for each input layer. To evaluate the effectiveness and efficiency of the proposed method, we implement a quantized lightweight detection network and deploy it on an FPGA platform. The experimental results demonstrate that our quantized method incurs very low accuracy loss and is highly efficient for neural network inference on the FPGA platform. The proposed quantized inference method is highly beneficial for deploying neural models on low-power devices.
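The bitwise-activation idea can be illustrated with a leaky-ReLU-style layer: if the negative slope is a power of two, the float multiplication becomes an arithmetic right shift, which maps directly onto FPGA logic. This is a minimal NumPy sketch; the int8 range, per-tensor scale, and slope 1/8 are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def quantize(x, scale, zero_point=0):
    """Map a float tensor to int8 with a per-tensor scale (assumed scheme)."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def leaky_relu_int(q):
    """Integer-only leaky ReLU with slope 1/8.

    Instead of multiplying negative activations by a float slope,
    an arithmetic right shift by 3 divides by 8 - a single bitwise
    operation, so the whole layer stays in integer arithmetic.
    """
    q32 = q.astype(np.int32)          # widen to avoid int8 overflow
    return np.where(q32 >= 0, q32, q32 >> 3).astype(np.int8)
```

Because the shift replaces the multiply exactly (up to flooring), the accuracy cost is confined to the choice of a power-of-two slope rather than to the arithmetic itself.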