A point cloud can provide a detailed three-dimensional (3D) description of a scene. Partitioning a point cloud into semantic classes is important for scene understanding, which can be used in autonomous navigation for unmanned vehicles and in applications including surveillance, mapping, and reconnaissance. In this paper, we review recent machine learning techniques for semantic segmentation of point clouds from scanning lidars and give an overview of model compression techniques. We focus especially on scan-based learning approaches, which operate on single sensor sweeps. These methods do not require data registration and are suitable for real-time applications. We demonstrate how these semantic segmentation techniques can be used in defence applications, in surveillance or mapping scenarios with a scanning lidar mounted on a small UAV.
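As an illustration of the scan-based approach mentioned above, a single lidar sweep can be projected into a 2-D range image so that standard image-segmentation networks can process it. The sketch below is a minimal spherical-projection example; the image size and field-of-view limits are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Scan-based approaches often project one lidar sweep into a 2-D range image
# (spherical projection). Hypothetical image size and vertical field of view;
# real sensors fix H to the number of scan lines.
def range_image(points, H=32, W=512, fov_up=15.0, fov_down=-25.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth angle
    pitch = np.arcsin(z / r)                     # elevation angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1 - (pitch - fd) / (fu - fd)) * (H - 1)).astype(int)  # row
    v = ((0.5 * (yaw / np.pi + 1)) * (W - 1)).astype(int)       # column
    img = np.zeros((H, W))
    img[np.clip(u, 0, H - 1), np.clip(v, 0, W - 1)] = r
    return img

pts = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 1.0]])
img = range_image(pts)
print(img.shape)  # (32, 512)
```

Because each sweep is projected independently, no registration between sweeps is needed, which is what makes this family of methods suitable for real-time use.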
Small, lightweight lidar systems are currently under considerable development, driven by applications in autonomous cars. This development makes it possible to equip small UAVs with this type of sensor. Adding an active sensor component, besides the more common passive UAV sensors, can provide additional capabilities. This paper gives experimental examples of lidar data and discusses applications and capabilities for the platform and sensor concept, including the combination with data from other sensors. The lidar can be used for accurate 3D measurements and has potential for detection of partly occluded objects. Additionally, positioning of the UAV can be obtained by combining lidar data with data from other low-cost sensors (such as inertial measurement units). These capabilities are attainable both for indoor and outdoor short-range applications.
Small high-resolution lidar systems can be used in a broad range of applications such as object detection, foliage penetration, and positioning. In this study, a scanning lidar was used together with two visual cameras and a low-cost inertial measurement unit to obtain precise positioning in forest environments. Position accuracy better than 0.05% of the traversed path was obtained with the system. The visual cameras and the inertial measurement unit were used to estimate an approximate trajectory, and the lidar data were used to refine the positioning using high-level and low-level features extracted from the lidar data. Low-level features were characterized by planes and sections of tree stems, and high-level features by whole trees. The system was able to operate without support from satellite navigation data or other positioning aids. The results can be applied to navigation in forest environments, e.g., for small unmanned aerial vehicles or ground vehicles.
Imaging for long-range target classification has practical limitations due to the demand for high transverse sensor resolution, which is connected to small pixel sizes, long focal lengths, and large-aperture optics. This motivates looking at other techniques such as laser range profiling, where the demand on transverse resolution is moderate but the resolution in the depth domain is high.
Laser range profiling is attractive because it can be seen as an extension of an ordinary laser range finder. The same laser can also be used for active imaging when the target comes closer and is angularly resolved. This paper will discuss laser range profiling for target recognition, both as a standalone method and in combination with low transverse resolution imaging. Examples of both simulated and experimental data for stationary and in-flight targets will be investigated and analyzed for target classification purposes.
Laser radar 3D imaging has the potential to improve target recognition in many scenarios. One case that is challenging for most optical sensors is recognizing targets hidden in vegetation or behind camouflage. The range resolution of time-of-flight 3D sensors allows segmentation of obscuration and target if the surfaces are separated far enough to be resolved as two distances. Systems based on time-correlated single-photon counting (TCSPC) have the potential to resolve surfaces closer to each other than laser radar systems based on proportional-mode detection technologies, and are therefore especially interesting. Photon counting detection is commonly performed with Geiger-mode avalanche photodiodes (GmAPD), which have the disadvantage that they can only detect one photon per laser pulse per pixel. A strong return from an obscuring object may saturate the detector and thus limit the possibility of detecting the hidden target even if photons from the target reach the detector. The operational range where good foliage penetration is observed is therefore relatively narrow for GmAPD systems. In this paper we investigate the penetration capability through semi-transparent surfaces for a laser radar with a 128×32 pixel GmAPD array and a 1542 nm wavelength laser operating at a pulse repetition frequency of 90 kHz. In the evaluation, a screen was placed behind different canvases with varying transmissions, and the detected signals from the surfaces were measured for different laser intensities. The maximum return from the second surface occurs when the total detection probability is around 0.65-0.75 per pulse. At higher laser excitation power the signal from the second surface decreases. To optimize the foliage penetration capability it is thus necessary to adaptively control the laser power to keep the returned signal within this region.
In addition to the experimental results, simulations studying the influence of pulse energy on penetration through foliage, in a scene with targets behind vegetation, are presented. The optimum detection of targets occurs here at a slightly higher total detection probability, because a number of pixels have no obscuration in front of the target in their field of view.
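The saturation behavior described above can be illustrated with a simple Poisson model of Geiger-mode detection: the hidden target is only seen when the obscurant produces no detection first. The sketch below assumes, hypothetically, equal mean photon returns from obscurant and target; under that assumption the second-surface return peaks when the total per-pulse detection probability is 0.75, at the upper end of the 0.65-0.75 range reported above.

```python
import numpy as np

# Geiger-mode APD: only the first detected photon per pulse registers.
# Mean detected photon numbers scale linearly with pulse energy k.
# Illustrative assumption: equal mean returns from obscurant and target.
k = np.linspace(0.01, 10, 5000)       # relative pulse energy
n1 = 0.5 * k                          # mean photons from obscuring canvas
n2 = 0.5 * k                          # mean photons from hidden target
p2 = np.exp(-n1) * (1 - np.exp(-n2))  # target seen only if canvas missed
p_tot = 1 - np.exp(-(n1 + n2))        # total detection probability per pulse

i = np.argmax(p2)                     # pulse energy maximizing target return
print(round(p_tot[i], 2))  # 0.75
```

Raising the pulse energy beyond this optimum only increases the chance that the obscurant fires the detector first, which is why adaptive laser power control is needed.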
Long range identification (ID), or ID at closer range of small targets, has its limitations in imaging due to the demand for very high transverse sensor resolution. This is, therefore, a motivation to look for one-dimensional laser techniques for target ID. These include laser vibrometry and laser range profiling. Laser vibrometry can give good results, but is not always robust as it is sensitive to whether certain vibrating parts on the target are in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector. The same laser can also be used for active imaging when the target comes closer and is angularly resolved. Our laser range profiler is based on a laser with a pulse width of 6 ns (full width at half maximum). This paper will show both experimental and simulated results for laser range profiling of small boats out to a 6 to 7-km range and an unmanned aerial vehicle (UAV) mockup at close range (1.3 km). The naval experiments took place in the Baltic Sea using many other active and passive electro-optical sensors in addition to the profiling system. The UAV experiments showed the need for a high range resolution; thus we used a photon counting system in addition to the more conventional profiler used in the naval experiments. This paper shows the influence of target pose and range resolution on the capability of classification. The typical resolution (in our case 0.7 m) obtainable with a conventional range finder type of sensor can be used for classification of large targets with a depth structure over 5 to 10 m or more, but for smaller targets such as a UAV a high resolution (in our case 7.5 mm) is needed to reveal depth structures and surface shapes. This paper also shows the need for 3-D target information to build libraries for comparison of measured and simulated range profiles.
At closer ranges, full 3-D images should be preferable.
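A common way to use range profiles for classification, consistent with the library-comparison approach described above, is to correlate a measured 1-D profile against library profiles simulated from 3-D target models. The sketch below is a hypothetical normalized-correlation matcher; the profile values and class names are invented for illustration.

```python
import numpy as np

# Match a measured range profile against library profiles (e.g. simulated
# from CAD models) using normalized cross-correlation; return the best class.
def best_match(measured, library):
    m = (measured - measured.mean()) / measured.std()
    scores = {}
    for name, prof in library.items():
        p = (prof - prof.mean()) / prof.std()
        scores[name] = np.mean(m * p)   # normalized correlation in [-1, 1]
    return max(scores, key=scores.get)

# Hypothetical profiles: return intensity per range bin.
boat = np.array([0., 1, 3, 2, 1, 0, 0, 2, 1, 0])
uav  = np.array([0., 0, 2, 5, 1, 0, 0, 0, 0, 0])
noisy = boat + 0.1 * np.array([1., -1, 0, 1, 0, -1, 1, 0, -1, 0])
print(best_match(noisy, {"boat": boat, "uav": uav}))  # boat
```

In practice the library must cover multiple target poses, since, as noted above, the measured profile depends strongly on target aspect angle.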
The purpose of this study is to present and evaluate the benefits and capabilities of high-resolution 3D data from unmanned aircraft, especially in conditions where existing methods (passive imaging, 3D photogrammetry) have limited capability. Examples of applications are detection of obscured objects under vegetation, change detection, detection in dark or shadowed environments, and immediate geometric documentation of an area of interest. Applications are exemplified with experimental data from our small UAV test platform 3DUAV, with an integrated rotating laser scanner, and with ground truth data collected with a terrestrial laser scanner. We process lidar data combined with inertial navigation system (INS) data to generate a highly accurate point cloud. The combination of INS and lidar data is achieved in a dynamic calibration process that compensates for the navigation errors from the low-cost and lightweight MEMS-based (microelectromechanical systems) INS. This system allows for studies of the whole data collection-processing-application chain and also serves as a platform for further development. We evaluate the applications in relation to system aspects such as survey time, resolution, and target detection capabilities. Our results indicate that several target detection/classification scenarios are feasible within reasonable survey times, from a few minutes (cars, persons, and larger objects) to about 30 minutes for detection and possibly recognition of smaller targets.
The detection and classification of small surface and airborne targets at long ranges is a growing need for naval security. Long range ID, or ID at closer range of small targets, has its limitations in imaging due to the demand for very high transverse sensor resolution. This motivates looking for 1D laser techniques for target ID. These include vibrometry and laser range profiling. Vibrometry can give good results but is sensitive to whether certain vibrating parts on the target are in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector. The same laser can also be used for active imaging when the target comes closer and is angularly resolved. The present paper will show both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and a UAV mockup at close range (1.3 km). We obtained good results with the profiling system, both for target detection and recognition. Comparison of experimental and simulated range waveforms based on CAD models of the target supports the idea of having a profiling system as a first recognition sensor, thus narrowing the search space for automatic target recognition based on imaging at close ranges. The naval experiments took place in the Baltic Sea with many other active and passive EO sensors besides the profiling system. A discussion of data fusion between laser profiling and imaging systems will be given. The UAV experiments were made from the rooftop laboratory at FOI.
KEYWORDS: Unmanned aerial vehicles, Sensors, LIDAR, 3D modeling, 3D acquisition, Visualization, Target detection, Clouds, Signal processing, Data modeling
This paper summarizes on-going work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish armed forces to evaluate usage in their mission cycle, and conduct interviews to clarify how to present data.
Two ladar sensor concepts for mounting on UAVs are studied. The discussion is based on known performance of commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. The system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon counting ladar with a matrix detector. Its purpose is to support large-area surveillance, intelligence, and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed.
We have tested the usage of 3D mapping together with military rangers. We tested the use of 3D mapping in the planning phase and as a last-minute intelligence update of the target. Feedback from these tests will be presented. We are performing interviews with various military professions to gain a better understanding of how 3D data are used and interpreted. We discuss approaches for how to present data from a 3D imaging sensor to a user.
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for detection and recognition of objects in a single-flight dataset, as well as for change detection using two or several data collections over the same scene. The work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters, and second, to examine the influence on accuracy of the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor are based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can achieve significantly improved accuracy compared to processing based solely on INS data.
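The local surface smoothness metric mentioned above can be computed as the RMS of point-to-plane residuals after fitting a plane to points sampled from a nominally flat surface. The sketch below is a minimal version using an SVD-based least-squares plane fit on synthetic data; the 1 cm noise level is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Local surface smoothness: RMS of point-to-plane residuals after a
# least-squares plane fit (via SVD) to points on a planar surface.
def plane_rms(points):
    c = points.mean(axis=0)
    # smallest right-singular vector = normal of the best-fit plane
    _, _, vt = np.linalg.svd(points - c)
    normal = vt[-1]
    residuals = (points - c) @ normal   # signed point-to-plane distances
    return np.sqrt(np.mean(residuals**2))

# Synthetic example: points on z = 0 with 1 cm Gaussian noise added.
rng = np.random.default_rng(0)
pts = np.zeros((1000, 3))
pts[:, :2] = rng.uniform(0, 5, (1000, 2))
pts[:, 2] = rng.normal(0, 0.01, 1000)
print(plane_rms(pts))  # close to the injected 0.01 m noise level
```

Evaluating this metric on planar patches (walls, roofs, roads) requires no external reference, which complements the distance and height comparisons against the terrestrial laser scanner.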
Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor with various objects and vegetation is possible. This mapping capability has a wide range of applications, including detection of mine-like objects, mapping of marine natural resources and fish spawning areas, as well as supporting the fulfillment of national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility, they can benefit from a combination with lidar for surveying and monitoring larger areas. With lidar-based sea floor maps containing information on substrate and attached vegetation, field investigations become more efficient. Field data collection can be directed into selected areas and even focused on identification of specific targets detected in the lidar map. The purpose of this work is to describe the performance of detection and classification of sea floor objects and vegetation for a lidar seeing through the water column. With both experimental and simulated data, we examine the lidar signal characteristics depending on bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field-documented sites, where field data were taken from underwater video recordings. To be able to accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and also in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as lidar system factors such as laser beam footprint size and sounding density.
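The depth and turbidity dependence of the returned signal can be illustrated with a simplified two-way attenuation model for the bottom pulse. The sketch below is an idealized model, not the paper's processing chain; the reflectivity and attenuation values are invented for illustration.

```python
import numpy as np

# Simplified bottom-return model: received bottom-pulse power falls off
# exponentially over the two-way water path,
#     P_bottom ∝ R_b * exp(-2 * K * d)
# R_b: bottom reflectivity, K: diffuse attenuation [1/m], d: depth [m].
def bottom_return(reflectivity, depth, K=0.2):
    return reflectivity * np.exp(-2 * K * depth)

# Hypothetical substrates: bright sand vs darker vegetated bottom at 5 m.
sand = bottom_return(0.3, 5.0)
vegetation = bottom_return(0.1, 5.0)
print(sand > vegetation)  # True
```

Since attenuation scales the returns from all substrates by the same depth-dependent factor, classification hinges on estimating K and d well enough to recover the relative reflectivity differences.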
Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV system components such as inertial navigation sensors, propulsion, control processors, and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability, accuracy, and speed of data collection, storage, and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, a great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small areas and more flexible for deployment. An advantage of high-resolution lidar compared to 3D mapping from passive (multi-angle) photogrammetry is the ability to penetrate vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of high importance. We evaluate the lidar data position accuracy both based on inertial navigation system (INS) data alone, and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented, as well as the capability for target surface reflectivity estimation based on measurements on calibration standards. Initial results of the general mapping capability, including detection through partly obscured environments, are demonstrated through field data collection and analysis.
While land maps of vegetation cover and substrate types exist, similar underwater maps are rare or almost non-existent. We developed the use of airborne bathymetric lidar mapping and high-resolution satellite data into a combined method for shallow sea floor classification. A classification accuracy of about 80% is possible for six classes of substrate and vegetation, when validated against field data taken from underwater video recordings. The method utilizes lidar data directly (topography, slopes) and as a means of correcting image data for water depth and turbidity. In this paper we present results using WorldView-2 imagery and data from the HawkEye II lidar system in a Swedish archipelago area.
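Validation against field data of the kind described above is typically summarized as overall accuracy from a confusion matrix: correctly classified samples on the diagonal, divided by the total. The sketch below uses a hypothetical 3-class matrix standing in for the six substrate and vegetation classes; the counts are invented so that the accuracy lands near the 80% figure.

```python
import numpy as np

# Overall accuracy from a confusion matrix: rows = field-data (video) truth,
# columns = lidar/satellite classification; diagonal counts are correct.
cm = np.array([[38, 4, 3],
               [5, 34, 6],
               [3, 7, 40]])
accuracy = np.trace(cm) / cm.sum()
print(round(accuracy, 2))  # 0.8
```

The full matrix is more informative than the single accuracy number, since it shows which substrate and vegetation classes are confused with each other.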
There is a demand from authorities for good maps of the coastal environment to support exploitation and preservation of coastal areas. The goal of environmental mapping and monitoring is to differentiate between vegetated and non-vegetated bottoms and, if possible, to differentiate between species. Airborne lidar bathymetry is an interesting method for mapping shallow underwater habitats. In general, the maximum depth range for airborne laser exceeds the possible depth range for passive sensors. Today, operational lidar systems are able to capture the bottom (or vegetation) topography as well as estimates of the bottom reflectivity using, e.g., reflected bottom pulse power. In this paper we study the possibilities and advantages for environmental mapping if laser sensing were further developed from single-wavelength depth sounding systems to include multiple emission wavelengths and fluorescence receiver channels. Our results show that an airborne fluorescence lidar has several interesting features which might be useful in mapping underwater habitats. An example is the laser-induced fluorescence giving rise to an emission spectrum which could be used for classification together with the elastic lidar signal. In the first part of our study, vegetation and substrate samples were collected and their spectral reflectance and fluorescence were subsequently measured in the laboratory. A laser wavelength of 532 nm was used for excitation of the samples. The choice of 532 nm as excitation wavelength is motivated by the fact that this wavelength is commonly used in bathymetric laser scanners and that the excitation wavelengths are limited to the visual region, as e.g. ultraviolet radiation is highly attenuated in water. The second part of our work consisted of theoretical performance calculations for a potential real system, and comparison of separability between species and substrate signatures using selected wavelength regions for fluorescence sensing.
In addition to the well-developed bathymetric LiDAR (Light Detection and Ranging) remote sensing technique, Airborne Hydrography AB (AHAB) has presented a new bathymetric LiDAR reflectance processing technique which enables new applications: producing seafloor reflectance images, and seafloor identification and classification. In the past decade, HawkEye II bathymetric LiDAR systems produced by AHAB collected and processed over 100,000 square kilometers of LiDAR reflectance data in more than ten countries in Europe, America, Oceania, the Indian Ocean, and Asia. In this paper, we introduce the background of bathymetric LiDAR, the algorithms and methods used in bathymetric LiDAR reflectance processing, and the reflectance image and seafloor classification applications.
Small underwater objects such as vehicles and divers can pose threats to fixed installations and ships. For ships, these threats are present both at sea and in harbors. Shallow underwater targets, including drifting mines, are difficult to detect with acoustic methods, and thus complementary methods are required. If an airborne platform is available, some of those targets could be detected by passive optical means. However, for sensing from a ship or from land, optical detection can be greatly improved by the use of a pulsed laser system. We present simulated data of importance for the design of a lidar system with a low incidence angle with respect to the water surface. We also present our first experimental data from underwater target detection at an incidence angle of 5 degrees.
Airborne depth sounding lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow areas. The received lidar pulse echo contains information about the sea floor depth, but other data can also be extracted. We currently perform work on bottom classification and water turbidity estimation based on lidar data. In this paper we present the theoretical background and experimental results on bottom classification. The algorithms are developed from simulations and then tested on experimental data from the operational airborne lidar system Hawk Eye II. We compare the results to field data taken from underwater video recordings. Our results indicate that bottom classification from airborne lidar data can be made with high accuracy.
This presentation will review some of the work on range-gated imaging undertaken at the Swedish Defence Research Agency (FOI). Different kinds of systems covering the visible to 1.5 μm region have been studied, and image examples from various field campaigns will be given. Examples of potential applications will be discussed.
In this work we evaluate the imaging performance of a range-gated underwater system in natural waters. Trials have been performed in both turbid and clear water. The field trials show that images can be acquired at significantly longer distances with the gated camera, compared to a conventional video camera. The distance at which a target can be detected is increased by a factor of 2. For images suitable for object identification, the range improvement factor is typically 1.5. We also show examples of image processing of the range-gated images, which increases the image quality significantly.
The effects of surface waves on laser beam transmission through the sea surface are experimentally examined. The purpose is to obtain experimental data for comparison with laser propagation models. Simultaneous measurements of the time and space variability of the air-sea interface are performed. A submerged screen, filmed by an underwater video camera, is used to measure the downwelling irradiance profiles. The measurements are made in calm winds, in a sheltered harbor environment. Calibrated values of downwelling irradiance are obtained by reference measurements in the laboratory. Two significant consequences of transmission through the sea surface are investigated: beam width at different depths averaged over several surface wave periods, and surface wave focusing or defocusing quantified by the irradiance fractional fluctuations (standard deviation divided by the mean). We compare the irradiance fractional fluctuations from our experiment with published data from underwater sunlight measurements and from laser beam simulations. The irradiance fractional fluctuations show a near-surface maximum and decay with depth. Low wind speed is expected to increase the fractional fluctuations caused by wave focusing effects. Our measurements qualitatively agree with the compared data, but exhibit larger fractional fluctuations.
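The fractional fluctuation statistic defined above (standard deviation divided by the mean) is straightforward to compute per depth from a time series of irradiance samples. The sketch below uses invented sample values chosen only to mimic the reported behavior: strong wave-focusing flicker near the surface, smoother irradiance at depth.

```python
import numpy as np

# Irradiance fractional fluctuation: standard deviation divided by the mean,
# computed from a time series of downwelling irradiance samples at one depth.
def fractional_fluctuation(irradiance):
    irradiance = np.asarray(irradiance, dtype=float)
    return irradiance.std() / irradiance.mean()

# Hypothetical samples: wave focusing causes large flicker near the surface...
near_surface = [0.4, 1.9, 0.6, 2.3, 0.8]
# ...while at depth the focusing has averaged out.
at_depth = [1.0, 1.1, 0.95, 1.05, 0.9]
print(fractional_fluctuation(near_surface) > fractional_fluctuation(at_depth))  # True
```

Because the statistic is normalized by the mean, it isolates the wave-induced variability from the overall attenuation with depth.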