This PDF file contains the front matter associated with SPIE Proceedings Volume 13049, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Following a short survey, features of conical lenses (axicons) that solve design problems of laser micro-radars for ophthalmology are analyzed, particularly for laser radars based on ray-tracing principles. By adding a triangulation function, new information about eye structures can be acquired. Configurations are proposed for triangulation control in 3D space, either simultaneously for all beams or sequentially in time, one by one. Beam dispatching is provided by an acousto-optical deflector. Another task easily resolved with an axicon is protecting the retina during laser-assisted measurements of corneal parameters. Beam-path patterns are analyzed for each of the proposed applications.
Digital holography (DH) is a coherent imaging technology for tactical applications. Because DH is a phase-sensitive imaging technology, it has multifunction capability, including 2D, 3D, and vibration imaging. Direct access to the image phase enables digital correction of image aberrations. We present an overview of DH technology and provide theory and example imagery for aberration-corrected 2D, 3D, and vibration imaging.
This paper introduces a transformative approach to 3D lidar imaging: the multi-tone continuous-wave (MTCW) coherent lidar system, which addresses coherence-length constraints in coherent continuous-wave (CW) lidar systems. We present a method utilizing static RF modulation frequencies to achieve 3D imaging. Our system demonstrates the capability to measure distances up to 11 km, surpassing the 950 m coherence length of the laser. This approach has far-reaching implications for applications requiring extended ranging capabilities, marking a significant evolution in coherent lidar technology. The study concludes by highlighting the potential impact on various fields, including autonomous navigation and remote sensing, thereby paving the way for enhanced spatial awareness in diverse applications.
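The abstract gives no implementation details, but the multi-tone idea can be illustrated with a toy calculation: each RF tone's round-trip phase encodes range modulo that tone's ambiguity interval, and combining several tones resolves a range far beyond any single tone's ambiguity. The tone frequencies, target range, and brute-force search below are illustrative assumptions, not values from the paper.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def tone_phase(R, f):
    """Round-trip phase (rad, wrapped to [0, 2*pi)) of an RF tone at range R."""
    return (2 * np.pi * f * 2 * R / C) % (2 * np.pi)

def estimate_range(phases, freqs, r_max):
    """Recover range by brute-force search: pick the candidate range whose
    predicted tone phases best match the measured ones (wrapped residuals)."""
    candidates = np.linspace(0.0, r_max, 2_000_000)
    err = np.zeros_like(candidates)
    for phi, f in zip(phases, freqs):
        pred = (2 * np.pi * f * 2 * candidates / C) % (2 * np.pi)
        d = np.abs(pred - phi)
        err += np.minimum(d, 2 * np.pi - d) ** 2
    return candidates[np.argmin(err)]

freqs = [10e3, 170e3, 2.3e6]   # assumed static RF tones (not from the paper)
R_true = 8_432.7               # metres; far beyond the finest tone's ambiguity
phases = [tone_phase(R_true, f) for f in freqs]
R_est = estimate_range(phases, freqs, r_max=15e3)
```

The coarsest tone (10 kHz) has an unambiguous interval of c/2f = 15 km, which sets the search span; the finest tone supplies the precision.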
We report on the design, measurement capabilities, and measured performance of a new Small All-range Lidar (SALI). The lidar transmitter uses a 1.55-μm erbium-doped fiber amplifier (EDFA) laser modulated with a return-to-zero pseudo-noise (RZPN) code. The receiver uses a 2×8-pixel HgCdTe avalanche photodiode (APD) array in linear single-photon detection mode. The receiver electronics calculate the target range by correlating the received signal with a patented 3-state RZPN kernel. A field-programmable gate array (FPGA) processes the signal in real time at up to a 120 Hz measurement rate for eight parallel receiver channels. The output power of the fiber laser, the detector gain, and the receiver integration time are all adjustable so that the instrument can measure planetary surfaces at ranges from more than 100 kilometers down to a fraction of a meter without saturation. SALI is primarily designed for mapping planetary bodies from orbit but can also be used as a guidance sensor for sample collection or landing. The instrument uses standard components from the fiber-optic communications industry, except for the detector, and can be built at a much lower cost than previous planetary lidars. SALI is also modular and can use different lasers and detectors at different wavelengths, and different receiver telescope sizes, to best fit specific mission requirements. We have recently completed the instrument integration and performed function and performance testing. The measured performance is close to the predictions given in our earlier publications. We will soon conduct vibration and thermal-vacuum tests to demonstrate readiness for use in a space mission.
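The patented 3-state RZPN kernel is not reproduced here, but the core idea of pseudo-noise correlation ranging can be sketched with a plain zero-mean kernel: the echo delay is recovered as the lag that maximizes the correlation between the photon-count record and the transmitted code. All numbers below (code length, rates, delay) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Return-to-zero pseudo-noise (RZPN) code: each chip is either a pulse in its
# first half-bin or no pulse at all.
n_chips = 256
chips = rng.integers(0, 2, n_chips)
tx = np.zeros(2 * n_chips)
tx[::2] = chips

# Simulated photon counts: flat background plus an attenuated, delayed echo.
n_bins, true_delay = 4096, 613
rate = np.full(n_bins, 0.05)
rate[true_delay:true_delay + tx.size] += 0.6 * tx
counts = rng.poisson(rate)

# Range recovery: correlate with a zero-mean version of the code so the flat
# background cancels on average; the peak lag is the round-trip delay in bins.
kernel = tx - tx.mean()
corr = np.correlate(counts, kernel, mode="valid")
delay_est = int(np.argmax(corr))
```

Multiplying the recovered bin delay by the bin duration and c/2 would convert it to range; adjustable laser power and integration time, as in the abstract, trade correlation peak height against measurement rate.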
We have developed a Geiger-mode lidar system for detecting individual birds in large flocks and tracking them with a real-time processing system. We present initial results of field tests conducted in North Dakota observing large flocks of red-winged blackbirds and their predators, and we analyze the signals and tracks arising from the birds and from a small UAS in the scene. We also present data from testing in Lawrence, Massachusetts, observing American crows, during which we exercised the real-time processing system. The exquisite sensitivity and rapid measurement rates achievable with Geiger-mode lidars enable rapid surveillance of airspaces for the detection of small targets (cross section of 100 cm² at 20 percent reflectivity) at operationally relevant standoff (400-800 m) with high revisit rates (5-10 Hz). The objective of this demonstration was the tracking of over 1000 birds in a flock occupying a volume of interest of (100 m)³ at a standoff of 400 m.
Small unmanned aerial systems (UASs) have found many applications within both the defense and commercial sectors [1]. With the increasing use of small UASs, it is desirable to equip them with RF sensors/payloads that permit them to work together to form a coherent beam on a target [2]. To do this, precise time synchronization among the UASs is essential. Current techniques rely on either GPS or an embedded signal from the target to time-synchronize multiple UASs.
The goal is to obtain timing accuracy of 10 to 100 picoseconds and phase coherence of about 1/10 of a relevant RF operating wavelength (UHF or higher band) between nodes.
4S Silversword Software and Services (4S) is using its free-space optical (FSO) system, the Through-the-Air Link Optical Component (TALOC), in conjunction with 915 MHz RF emissions to obtain sub-nanosecond time-of-flight measurements, corresponding to fractional-wavelength position precision.
In this paper, we show the calculations, technology background, and results of system tests.
Global digital elevation models (DEMs) generated from spaceborne synthetic aperture radar (SAR), such as the Copernicus 30 m DEM, provide exceptional coverage of the Earth's topography. However, SAR-derived DEMs struggle to accurately map terrain under forest canopies and in certain topographic conditions. In contrast, spaceborne laser altimeters like ICESat-2 can accurately measure ground elevations in areas where SAR sensors struggle, but the lack of dense coverage from laser altimetry precludes creation of complete global DEMs. This work aims to combine the accuracy of laser altimetry with the coverage of SAR using deep learning. A convolutional neural network (CNN) is trained to correct the Copernicus 30 m DEM using sparse but accurate ICESat-2 elevations in the southeastern United States around South Carolina. Model inputs include temporally coincident imagery from Sentinel-2A, SAR inputs from Sentinel-1B, and the Copernicus 30 m DEM itself. The CNN is trained to correct the elevation of each individual pixel, allowing the use of sparse ICESat-2 measurements and the creation of a global DEM with the coverage of SAR and accuracy closer to that of laser altimetry. The resulting CNN model reduced ground elevation RMSE from 8.65 m to 2.62 m. The corrected DEM has the potential to benefit numerous scientific endeavors requiring accurate global topographic information.
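The paper's key trick, training a per-pixel correction against sparse reference points, amounts to evaluating the loss only where ICESat-2 ground truth exists. A minimal sketch of such a masked loss, with toy data and a constant correction standing in for the CNN's output (all values are assumptions, not from the paper):

```python
import numpy as np

def masked_mse(pred_correction, dem, icesat2_elev, valid_mask):
    """MSE between corrected DEM and ICESat-2 elevations, evaluated only at
    the sparse pixels where an ICESat-2 ground return exists."""
    residual = (dem + pred_correction - icesat2_elev) * valid_mask
    return (residual ** 2).sum() / valid_mask.sum()

rng = np.random.default_rng(0)
dem = rng.normal(100.0, 10.0, (64, 64))      # toy "Copernicus" tile (metres)
true_bias = 5.0                               # canopy-induced elevation offset
truth = dem - true_bias                       # toy bare-earth surface
mask = rng.random((64, 64)) < 0.02            # ~2% of pixels have ICESat-2 hits
icesat2 = np.where(mask, truth, 0.0)

# A CNN would predict `correction` from imagery; a constant correction here
# just shows that the loss is driven only by the sparse supervised pixels.
loss_uncorrected = masked_mse(np.zeros_like(dem), dem, icesat2, mask)
loss_corrected = masked_mse(np.full_like(dem, -true_bias), dem, icesat2, mask)
```

Because unsupervised pixels contribute nothing to the loss, the network can be trained densely while labels exist only along sparse altimeter ground tracks.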
We describe a modern C++20/Python toolbox for feature extraction and geospatial data manipulation developed at the Center for Space Research and used for a variety of data processing applications in our lab. The toolbox provides powerful feature extraction tools in a suite of flexible, modular applications that can be used to compose geospatial data processing pipelines. The toolbox is exposed through a web API and is planned for release later this year.
ICESat-2's Advanced Topographic Laser Altimeter System (ATLAS) can penetrate water bodies, enabling accurate and detailed measurements of bathymetry in diverse aquatic environments. ATLAS's capabilities have made it a popular tool for understanding underwater topography and characteristics. In this paper, we present a deep residual classification network used to identify ICESat-2 bathymetry and water-surface photons. The training data were derived both from hand-labeled ICESat-2 ground tracks and from synthetic data produced by custom ICESat-2 ground-track simulator software. This investigation is unique in that it used a very wide variety of ground tracks across the entire globe and applied several different metrics to summarize classification performance.
A limitation of traditional airborne and spaceborne lidar instruments is the inability to provide data products in real time. This challenge is compounded by typical research-driven desires to build ever more complicated lidar sensors, which overlook the need to provide simple, but timely, data products to operational forecast models. Machine learning techniques using convolutional neural networks (CNNs) have been developed and applied to single-wavelength (e.g., 1064 nm) data from the airborne Cloud Physics Lidar (CPL) and have shown encouraging results for feature detection at finer resolutions compared to traditional methods, notably during noisy daytime conditions. Current technologies, paired with properly scoped measurement goals rather than be-all/end-all research ambitions, permit designs for miniaturized lidar sensors that can be placed on drones and, ultimately, in constellations of minisats. Use of advanced machine learning techniques for data processing permits generation of real-time data products that can be quickly assimilated into predictive models (for air quality and human health) and used for decision making (such as hazardous plume detection and monitoring).
A small unmanned aerial vehicle (sUAV) can be used to reconstruct a 3D scene by capturing frames consisting of LiDAR and aerial photography data, creating a textured digital surface model (TDSM) with the full LiDAR point cloud and an overlaid registered image. Forming a complete 3D scene using texel images (fused LiDAR and digital image scans) from an entire flight can be computationally prohibitive on low-cost hardware, so a streaming bundle adjustment algorithm can be used to process the data using a sliding window. The streaming algorithm uses less memory and is faster than a full bundle adjustment. Depending on the flight pattern, matching points in the scene may be visible from frames which are not adjacent in time, so reconstructing a complete scene can take into account matching points from non-adjacent frames to better correct for error.
A modification to the streaming bundle adjustment algorithm is described that finds overlap between frames that are not adjacent in time and uses it to correct for error. This algorithm also addresses the loop-closing problem that occurs when the sensor returns to the starting point of a survey. Flight data from a sensor constructed with low-cost, commercial off-the-shelf parts are used to demonstrate how 3D scene reconstruction using this algorithm corrects for errors, compared with data gathered from a full-scale aircraft. Examples of the resulting TDSMs are presented.
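As a rough, one-dimensional analogue of the described approach (not the authors' algorithm; all parameters are assumptions), the sketch below estimates poses from noisy odometry in a sliding window, freezing poses that leave the window, and folds in accurate matches between non-adjacent frames once the newer frame of a pair enters the window; these loop-closure-style constraints pull the drifting trajectory back toward the frozen reference pose.

```python
import numpy as np

rng = np.random.default_rng(3)
n, W = 40, 10                                  # frames, sliding-window size
truth = np.cumsum(rng.normal(1.0, 0.2, n))     # 1-D pose per frame
odom = np.diff(np.concatenate([[0.0], truth])) + rng.normal(0, 0.10, n)
# accurate matches between non-adjacent frames (e.g. crossing flight lines)
pairs = [(2, 25), (5, 30), (10, 38)]
match = {(i, j): truth[j] - truth[i] + rng.normal(0, 0.01) for i, j in pairs}

est = np.zeros(n)
for k in range(n):
    est[k] = (est[k - 1] if k else 0.0) + odom[k]   # dead-reckoned guess
    lo = max(0, k - W + 1)
    free = list(range(lo, k + 1))
    idx = {p: c for c, p in enumerate(free)}
    rows, rhs = [], []

    def add(i, j, z, w=1.0):
        # measurement z of (x_j - x_i); poses outside the window stay frozen
        row, r = np.zeros(len(free)), w * z
        if j in idx: row[idx[j]] += w
        else:        r -= w * est[j]
        if i in idx: row[idx[i]] -= w
        else:        r += w * est[i]
        rows.append(row); rhs.append(r)

    for i in free:
        if i == 0:                              # anchor the survey origin
            row = np.zeros(len(free)); row[0] = 1.0
            rows.append(row); rhs.append(odom[0])
        else:
            add(i - 1, i, odom[i])
    for (i, j) in pairs:                        # loop-closure-style matches
        if lo <= j <= k:
            add(i, j, match[(i, j)], w=10.0)

    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    est[lo:k + 1] = sol
```

Until the first non-adjacent match enters a window, the windowed solve reproduces dead reckoning exactly; afterwards the matched frames, and the window around them, are corrected. A real texel-image implementation would carry 6-DOF poses and image feature constraints, but the marginalize-and-constrain pattern is the same.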
Active illumination with underwater laser imaging has unique advantages for the identification of underwater objects, especially in shallow waters, complex marine environments, and inaccessible locations. However, backscattered light from water particulates can blur the resulting laser images. To improve the quality of underwater laser images, we have examined a wide range of image enhancement (IE) and restoration (IR) techniques. In recent prior work, we experimentally evaluated the efficacy of over 20 IE/IR methods specifically for underwater object recognition, examining the impact of artifacts introduced by IE/IR on the deep neural network (DNN) architecture required for optimal classification accuracy. This paper builds on that work by considering the effect of polarization on underwater image restoration and object recognition. Using a one-of-a-kind multi-polarization underwater laser image dataset, this paper examines the impact of polarization on the efficacy of IE/IR algorithms and proposes a DNN for fusing and jointly exploiting the multi-polarization data for improved underwater object recognition.
The key insight of this paper is that the rigid lattices used in many coincidence-processing algorithms for Geiger-mode lidar data necessarily lead to pathological density estimates when, owing to the varied scattering properties of materials in an imaged scene, too few detections fall into the discretized data chimneys. This paper proposes the use of dynamic lattices to ensure detection counts are bounded. A specific example of binary space partitioning is presented in which a minimum number of detections is specified.
Lidar tomography is a method that constructs high-resolution images of objects from multiple range projections along different projection axes. This approach is one way to overcome traditional limitations in remote sensing with focal imaging such as diffraction, optical aberrations, and air turbulence. We have shown previously through detailed modelling and simulation that lidar tomography can generate resolved imagery of objects from a moving platform if sufficient diversity of view angles and appropriate geolocation accuracy requirements can be met. Here we show that the geolocation accuracy requirements can be met through a data-driven approach that does not require accurate knowledge of the platform’s position relative to the object being imaged. This alleviates a significant technical burden of motion tracking and opens the way for a more practical implementation of the lidar tomography technique for remote sensing and imaging.
The triggering of Geiger-mode Avalanche Photodiodes (GmAPDs) depends on the generation of primary electrons and subsequent current avalanche. The number of primary electrons generated in a GmAPD in a given interval of time is governed by Poisson statistics, with the parameter determined by the integral of the primary electron generation rate over the interval of interest. In synchronous GmAPD cameras, avalanche events are resolved up to a certain time bin width and subsequent avalanche events cannot occur until the APDs are re-armed, incurring blocking loss on the measured signal. Reconstruction of the temporal profile of the primary electron generation rate of a single pixel of a GmAPD camera is presented. This includes describing the probability distribution of trigger events over the time bins, maximum likelihood estimates of Poisson parameters, blocking loss correction, and estimates on reconstruction error and sampling error.
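A standard way to carry out the blocking-loss correction described here is Coates-style estimation: condition each bin's trigger count on the number of frames still armed when that bin begins, then invert the Poisson no-trigger probability. The sketch below (toy rates, not values from the paper) simulates first-trigger-per-frame data and compares the naive and corrected rate estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.concatenate([np.full(20, 0.02), [1.5], np.full(20, 0.02)])  # rate/bin
N = 200_000                                                          # frames

# Simulate first-trigger-per-frame: once a pixel avalanches it stays blocked
# for the rest of the frame.
p_bin = 1.0 - np.exp(-lam)          # per-bin trigger probability, if armed
n = np.zeros(lam.size, dtype=int)
armed = np.full(N, True)
for k, p in enumerate(p_bin):
    fired = armed & (rng.random(N) < p)
    n[k] = fired.sum()
    armed &= ~fired

# Naive estimate ignores blocking; the Coates-style estimator conditions each
# bin on the number of frames still armed when that bin begins.
survivors = N - np.concatenate([[0], np.cumsum(n[:-1])])
lam_naive = -np.log1p(-n / N)
lam_coates = -np.log1p(-n / survivors)
```

After the strong return at bin 20, the naive estimate collapses (most frames are already blocked), while the corrected estimate recovers the true background rate.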
Areté has developed and demonstrated a LIDAR sensing technology for detecting and locating distant small objects near the horizon with rapid 360-degree scans using low-cost components. This technology has been demonstrated for the detection of sUAS. The technology utilizes a coordinated high-speed spinning scanner and imaging receiver to obtain range-angle-angle measurements within large solid angles. Ranging is achieved by angular displacements of images due to receiver angular rotation during the time of flight from the sensor to a target and back.
Because ranging is not achieved by laser modulation, the laser can be a continuous-wave laser, enabling much greater flexibility in system design for greater efficiency and lower cost. The sensor provides continuous coverage without gaps and has demonstrated detection while scanning. Instead of using custom high-bandwidth detectors, range is measured in the spatial domain at a camera image plane. The sensor technology has none of the range-ambiguity constraints that typically arise in the high-pulse-rate LIDAR systems often used for rapid large-area coverage.
Areté has demonstrated a bistatic version of the sensing approach, with separate transmitter and receiver scanners sharing a common rotational axis and with an event-camera imager. An event camera at the image plane, combined with appropriate signal processing, reduces sensitivity to solar background.
Because of the technology's compatibility with narrow-linewidth CW lasers, it is expected to also have utility for detection of chemical-agent dispersals, as well as for low-SWaP vertical-profiling DIAL systems for atmospheric monitoring.
Lidar receivers with exquisitely sensitive Geiger-mode detectors are able to detect surfaces even when the line of sight from the lidar sensor to the surface is highly occluded by intervening forest canopy. Additionally, repeated scanning of a region of interest from a diversity of perspectives increases the likelihood of imaging any given surface through at least one substantially unoccluded line of sight. Together, these techniques allow airborne lidar collections to be tailored to achieve comprehensive human activity layer (HAL) data collection, even in areas with dense foliage. We present a study of the performance of a 3DEO lidar for foliage poke-through applications, exploiting both its Geiger-mode sensitivity and agile geo-referenced scanning system. We present two methods for estimating the utility of the resulting 3D point clouds in the HAL, near the ground, based on the spatial statistics of the point clouds. We apply those methods to airborne Geiger-mode lidar data of deciduous forests in Massachusetts and conifers in the US Pacific Northwest. We quantify the completeness of the point clouds as a function of the collection parameters. We then use this analysis to estimate the ideal collection parameters for a Geiger-mode lidar with georeferenced scanning to yield a high-utility data product.
Automotive lidar is rapidly becoming a mainstream enabling technology for object detection and localization in advanced driver assistance systems and automated vehicles like robotaxis. In response to this demand, lidar characterization standards and specifications are being developed, with DIN SAE specification 91471 as one of the first published efforts. A core purpose of this presentation is to compare the recommendations in specification 91471 to what automotive lidar manufacturers are publishing and to discuss the differences. We will also make a case for employing component specifications like these in the context of vehicle and perception system level goals.
Multiple space applications require infrared photodiodes, including spectroscopy, optical communication links, and rapid Doppler-shift LIDAR. Extended InGaAs photodiodes with 2.4-micron cutoff wavelength have recently been shown to be resilient to irradiation with protons, alpha particles, carbon ions, and iron ions at fluence levels corresponding to multi-year low-Earth-orbit, geostationary, interplanetary, and deep-space missions. Our prior studies have shown that radiation-induced displacement damage may lead to some elevation of the photodiode's leakage current, without significant sign of ionization damage. To further confirm this finding, these devices were subjected to gamma rays to explicitly measure the effect of ionization damage alone. We have successfully tested 290 μm diameter, 2.4-micron-wavelength extended InGaAs photodiodes coupled to single-mode fiber for gamma radiation. Three devices were cooled to dry-ice temperatures (about -71 °C) and subjected to two rounds of 662 keV gamma rays from cesium-137 at 15 krad (water) each, for a cumulative dose of 30 krad (water). The devices were reverse biased at 100 mV and their leakage current was monitored in situ to simulate their function while exposed to radiation in a space environment. The in-situ data showed a slight increase in leakage current in the presence of gamma radiation, which returned to its original value once the gamma rays were turned off, demonstrating the resilience of extended InGaAs photodiodes to ionization damage. These results were corroborated with detailed pre- and post-radiation measurements, which also demonstrated unchanged quantum efficiency and bandwidth over a wide range of operating temperatures, from -71 °C to +20 °C.
Long-range, high-speed, free-space LIDAR systems face challenges from ambient background noise. Maximizing the signal-to-noise ratio (SNR) is vital for extending range and increasing scanning speed. One effective strategy in steered LIDAR systems is field-of-view (FOV) filtering to retain signal and suppress noise. Pixel-based approaches with sensitive detector arrays are costly, especially at near-infrared wavelengths. This work instead employs a digital micromirror device (DMD) as a pseudo-pixel array, redirecting signal light to a single-pixel detector while routing noise light to a beam dump. A simple experimental setup is used to explore the DMD's range-improvement potential. Ambient noise rejection ratios greater than 20 were exhibited using a 6×6 pseudo-pixel array on the DMD, corresponding to a 2.88-fold range improvement in a theoretical LIDAR system. This approach thus offers a means of enhancing long-range, high-speed, free-space LIDAR systems.
Light Detection And Ranging (LiDAR) is pivotal across industries like autonomous vehicles, mapping, and defense, requiring precise 3D spatial data attainable only through active sensing. Traditional detectors, such as linear mode or Geiger mode avalanche photodiodes (APDs), have limitations. Linear mode APDs (LM-APD) provide low-light sensing but with limited gain values, particularly costly in HgCdTe. Geiger mode APDs (GMAPD) offer greater sensitivity but operate as switches with a reset time, impacting efficiency. The discrete amplification photon detector (DAPD) aims to overcome these limitations by integrating negative feedback to quench avalanching gain, providing single-photon detection with faster reset times and high gain. We present characterization results of the DAPD, including sensitivity, background noise, and reset time, crucial for LiDAR viability. This advancement not only enhances LiDAR performance but also broadens its applications.
Exciting Technology developed an optical beam steering device for NASA Langley Research Center's multifunctional flash LiDAR for lunar landing missions, which would benefit from a low-C-SWaP, highly capable beam steering technology. The beam steering technology can also be applied to the Navigation Doppler LiDAR (NDL) system in the future. These LiDAR sensors provide high-resolution surface elevation maps and precise relative proximity, velocity, and orientation data during vehicle descent [1], requiring fast, wide beam steering with maintained optical quality and performance. Existing optical beam steering technology for large apertures and wide angles is restricted to classic gimbals, which are expensive and bulky with slow slew rates, or Risley prisms, which are heavy with low optical quality. Non-mechanical solutions currently are either immature or too expensive to fabricate for larger transmissive apertures [2]. Exciting Technology has built a mechanical beam steering device to demonstrate beam steering technology that can be integrated into an existing LiDAR system to magnify and steer a 50 mm beam to a ±6° angle. The demonstration unit uses commercially available motors and stages to highlight the optical capability, and has a path to optimized C-SWaP and ruggedization for space application without bulky hardware. The developed beam steerer will correct for vehicle attitude changes during the Hazard Detection and Avoidance (HDA) phase to point the LiDAR at the designated landing site. The beam steerer can be configured for nadir pointing during the LiDAR altimetry and Terrain Relative Navigation (TRN) phases.
Indoor positioning and navigation have emerged as critical areas of research due to the limitations of GPS in enclosed environments. This study presents an approach to high-precision indoor localization employing the extended Kalman filter (EKF). Unlike traditional methods that often suffer from noise and multipath effects, the EKF accounts for nonlinearities and offers a recursive solution for estimating the state of a dynamic system. We deployed a sensor on a mobile robot that moves through an indoor environment containing a moving obstacle. Our findings demonstrate significant accuracy in locating the obstacle while maneuvering inside the environment.
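As an illustration of the kind of recursive estimator described (the paper's exact models are not given, so the motion and measurement models below are assumptions), here is a minimal EKF tracking a 2-D constant-velocity obstacle from noisy range-bearing measurements:

```python
import numpy as np

def ekf_step(x, P, z, dt, q=0.05, r=(0.1, 0.02)):
    """One predict/update cycle for a 2-D constant-velocity target observed
    with range-bearing measurements from the origin. State: [px, py, vx, vy]."""
    # predict: linear constant-velocity motion with rough process noise
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])
    x = F @ x
    P = F @ P @ F.T + Q
    # update: linearize the nonlinear range-bearing measurement about x
    px, py = x[0], x[1]
    rho = np.hypot(px, py)
    h = np.array([rho, np.arctan2(py, px)])
    H = np.array([[ px / rho,     py / rho,    0, 0],
                  [-py / rho**2,  px / rho**2, 0, 0]])
    R = np.diag([r[0]**2, r[1]**2])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
dt, steps = 0.1, 100
truth = np.array([4.0, 3.0, -0.5, 0.2])           # obstacle: pos (m), vel (m/s)
x = np.array([3.0, 4.0, 0.0, 0.0])                # deliberately poor first guess
P = np.eye(4)
for _ in range(steps):
    truth[:2] += truth[2:] * dt
    z = np.array([np.hypot(*truth[:2]) + rng.normal(0, 0.1),
                  np.arctan2(truth[1], truth[0]) + rng.normal(0, 0.02)])
    x, P = ekf_step(x, P, z, dt)
```

The Jacobian H linearizes the range-bearing model at the current estimate, which is the step that distinguishes the EKF from a linear Kalman filter; the covariance P shrinks as measurements accumulate.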