This paper presents the ground and flight test results of an above ground level (AGL) sensing device using pulsed laser ranging. The sensor was developed to integrate with automated flight controls used in manned and unmanned aircraft for precision landings. Using highly accurate, real-time distance measurements, AGL readings can be fed to the flight controls to minimize landing loads and approach speeds. The sensor uses a pulsed laser and receiver to measure the time of flight between the laser fire and the return signal from the ground. This line-of-sight range, in conjunction with the air vehicle's attitude readings, provides a precise (6-inch resolution) above-ground-level measurement fully compensated for slant range.
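As a rough illustration of the slant-range compensation described above, the sketch below projects a line-of-sight range onto the local vertical using pitch and roll. The function name, the boresight assumption and the numbers are illustrative only and are not taken from the paper, which may use the full attitude solution.

```python
import math

def agl_from_slant_range(slant_range_m, pitch_deg, roll_deg):
    """Project a line-of-sight laser range onto the local vertical.

    Assumes the rangefinder boresight is aligned with the aircraft body
    z-axis, so the vertical component follows from pitch and roll alone.
    Illustrative sketch, not the paper's exact compensation.
    """
    pitch = math.radians(pitch_deg)
    roll = math.radians(roll_deg)
    return slant_range_m * math.cos(pitch) * math.cos(roll)

# Example: 120 m slant range at 5 deg pitch and 10 deg roll
print(round(agl_from_slant_range(120.0, 5.0, 10.0), 2))  # ~117.72 m AGL
```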
We present the latest results obtained with our Imaging Topological Radar (ITR), a high-resolution laser scanner aimed at reconstructing 3D digital models of real targets, either single objects or complex scenes. The system, based on an amplitude-modulation ranging technique, simultaneously provides a shade-free, high-resolution, photographic-like picture and accurate range data in the form of a range image, with resolution depending mainly on the laser modulation frequency (current best performance is ~100 μm). The complete target surface is reconstructed from the sampled points using specifically developed software tools. The system has been successfully applied to scan different types of real surfaces (stone, wood, alloy, bone) and lends itself to relevant applications in fields ranging from industrial machining to medical diagnostics. We present some relevant examples of 3D reconstruction in the cultural heritage field. These results were obtained during recent campaigns carried out in situ at various Italian historical and archaeological sites (S. Maria Antiqua in the Roman Forum; the "Grotta dei Cervi", Porto Badisco, Lecce, southern Italy). The 3D models presented will be used by cultural heritage conservation authorities for restoration purposes and will be available on the Internet for remote inspection.
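The dependence of range resolution on modulation frequency follows from the standard amplitude-modulation ranging relation, sketched below in Python. The 200 MHz modulation frequency used in the example is a hypothetical value, not the ITR's actual setting.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def am_range(phase_rad, mod_freq_hz):
    """Range from the phase shift of an amplitude-modulated beam:
    R = c * delta_phi / (4 * pi * f_mod), unambiguous out to c / (2 * f_mod).
    Illustrative of AM ranging in general, not the ITR's exact signal chain.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

f_mod = 200e6                           # hypothetical modulation frequency
print(C / (2 * f_mod))                  # ambiguity interval, ~0.75 m
print(am_range(1e-3, f_mod) * 1e6)      # ~119 um per mrad of phase resolution
```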
We report the theory and implementation of new approaches for the processing of 3D range data in pursuit of library-based object recognition and registration. The image data are obtained from an active LaDAR system (scanned Time-Correlated Single Photon Counting or time-gated Burst Illumination Laser) and describe the range and 3D surface characteristics of remote objects at specific views. The reflected laser signal returns are generally embedded in noise and clutter of uncertain origin. We have applied the Markov Chain Monte Carlo (MCMC) methodology, using random sampling of the search space, to evaluate the number, positions and amplitudes of returns in such scenarios. We describe methods for removing outliers and smoothing these time-of-flight generated depth images, based on least median of squares and anisotropic diffusion, respectively. Further, we outline and demonstrate procedures for registration and pose determination of objects from range data. These consist of three phases, namely point feature extraction, pose clustering and registration. The first computes a surface metric facilitating candidate correspondence determination, using the technique of pair-wise geometric histograms. The second is carried out by a leader-based algorithm, which does not require the number of clusters to be pre-specified. The third is an extension of the iterative closest point (ICP) method, specifically designed for mesh representations. Collectively, these processes allow an object within a scene, described by a 3D range image, to be matched with a preformed model from a database.
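Of the processing steps listed, the anisotropic-diffusion smoothing is the easiest to sketch. The Perona-Malik style iteration below is a common formulation; the paper's exact edge-stopping function and parameters are not stated, so treat this as a generic depth-image smoother that preserves range discontinuities.

```python
import numpy as np

def anisotropic_diffusion(depth, n_iter=20, kappa=0.05, lam=0.2):
    """Perona-Malik diffusion on a depth image; kappa sets how sharp a
    depth step must be to block diffusion (a tuning choice). Periodic
    boundaries via np.roll are used here for brevity only."""
    z = depth.astype(float).copy()
    for _ in range(n_iter):
        # forward differences to the four neighbours
        dn = np.roll(z, -1, axis=0) - z
        ds = np.roll(z,  1, axis=0) - z
        de = np.roll(z, -1, axis=1) - z
        dw = np.roll(z,  1, axis=1) - z
        # conduction coefficients shrink across depth discontinuities
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        z += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return z
```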
Many present-day machine vision tasks require real-time video stream processing at rates of the order of (25 frames / sec) x (640 x 480 pixels / frame) x (24 bit / pixel) = 184 Megabit / sec and higher. As reasonable estimates show, only a very limited number of operations (even quite primitive ones) can be applied to each pixel of every video frame if the required processing rate is to be achieved. Hence, applying algorithms that rely heavily on formal methods (e.g. iterative approaches, transformations in multidimensional feature spaces, etc.) to the whole video stream becomes unaffordable. A potential (and practically working) workaround is to introduce a cascade approach. From an architectural viewpoint, cascade processing consists of several predefined stages. A video frame passes through them from the zero stage (the original frame) to the final stage (the processing results). Some stages can be skipped, and the whole processing can be canceled at any stage. Passing through two sequential stages can be viewed as applying some operation to the information left to be processed. The keystone of the cascade approach is designing an optimal sequence in which the simplest operations precede the more complex ones, so that the processing becomes essentially non-uniform and non-linear in terms of processing rates: the great bulk of useless data is discarded in the initial, fast stages, while the further analysis applies smart algorithms to a comparatively low data flux. The presentation demonstrates a practical example of the cascade approach for the task of recognizing aircraft tail identification numbers.
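The control flow of such a cascade is simple to express; the sketch below shows the generic pattern of early rejection. The stage names in the comment are hypothetical, chosen only to suggest a tail-number pipeline, and do not come from the presentation.

```python
def run_cascade(frame, stages):
    """Pass a frame through ordered stages. Each stage returns either a
    (typically reduced) intermediate result or None to cancel processing.
    Cheap, high-rejection stages go first so that the expensive analysis
    only sees a small residual data flux. Illustrative sketch only."""
    data = frame
    for stage in stages:
        data = stage(data)
        if data is None:          # whole-frame rejection: stop early
            return None
    return data

# Hypothetical stages for tail-number recognition, cheapest first:
# stages = [coarse_motion_gate, contrast_gate, candidate_regions,
#           character_segmentation, character_classifier]
```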
Against the background of nuclear safeguards applications using commercially available satellite imagery, procedures for wide-area monitoring of the Iranian nuclear fuel cycle are investigated. Specifically, object-oriented classification combined with statistical change detection is applied to high-resolution imagery. In this context, a feature recognition and analysis tool, called SEaTH, has been developed for automatic selection of optimal object-class features for subsequent classification. The application of SEaTH is presented in a case study of the NFRPC Esfahan, Iran. The transferability of classification models is discussed with regard to the need to automate extensive monitoring tasks.
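If, as with similar feature-selection tools, SEaTH ranks candidate object features by a class-separability measure under a Gaussian assumption, the core computation resembles the Jeffries-Matusita separability sketched below. This is a generic illustration of that kind of ranking, not SEaTH's documented implementation.

```python
import numpy as np

def jeffries_matusita(feature_class_a, feature_class_b):
    """Separability of one object feature between two classes, assuming
    Gaussian feature distributions. Returns a value in [0, 2]; 2 means
    full separability. Hedged sketch of a SEaTH-style ranking criterion."""
    m1, m2 = np.mean(feature_class_a), np.mean(feature_class_b)
    v1, v2 = np.var(feature_class_a), np.var(feature_class_b)
    # Bhattacharyya distance between the two 1-D Gaussians
    b = 0.125 * (m1 - m2) ** 2 * 2.0 / (v1 + v2) \
        + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2)))
    return 2.0 * (1.0 - np.exp(-b))

# Rank candidate features by separability and keep the best few:
# scores = {name: jeffries_matusita(a[name], b[name]) for name in features}
```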
Goodrich Sensor Systems has developed a Laser Perimeter Awareness System (LPAS) for surveillance that both detects the presence and tracks the motion of intruders, locating them in range, bearing, and elevation with respect to the position of the sensor. The system places graphic symbols representing the intruders onto a map or aerial photo at the appropriate locations. The coordinates of the intruders are available to cue additional sensors, such as thermal imagers, to automatically slew their fields of view toward the intruder for further investigation. Hence, security personnel can assess whether the detected intruder is a person, an animal, or simply a false alarm and take appropriate action. Detection performance as a function of object size is discussed.
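Placing a detection on a map from sensor-relative range, bearing and elevation is simple geometry; the sketch below shows one way to do it in a flat-earth local frame. The conventions (bearing clockwise from north, east/north map axes) are assumptions for illustration and are not taken from the LPAS description.

```python
import math

def intruder_to_map(sensor_east, sensor_north, range_m, bearing_deg, elev_deg):
    """Convert a range/bearing/elevation detection into map coordinates
    relative to the sensor position. Flat-earth approximation; purely
    illustrative of the geometry involved."""
    ground_range = range_m * math.cos(math.radians(elev_deg))
    east = sensor_east + ground_range * math.sin(math.radians(bearing_deg))
    north = sensor_north + ground_range * math.cos(math.radians(bearing_deg))
    height = range_m * math.sin(math.radians(elev_deg))
    return east, north, height

# e.g. symbol position for a 250 m detection at 030 deg bearing, 2 deg up:
print(intruder_to_map(0.0, 0.0, 250.0, 30.0, 2.0))
```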
BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and is unique in its ability to provide high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate the performance of these 3D LADAR systems have been lacking, relying either on single-pixel LADAR performance or on extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate, real-world environment, the model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation treats every pixel in the detector array by modeling the non-uniformity of each individual pixel. Noise sources are modeled on the basis of their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner, performance is determined very accurately pixel by pixel. Model outputs take the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise ratio, and probability of detection across the array. Other outputs include the power distribution from a target, signal-to-noise ratio vs. range, probability of target detection and identification, and NEP vs. gain.
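The per-pixel gain assignment described above (compare a random number against the cumulative normal distribution of gain) is an inverse-transform draw, sketched below. Parameter names and values are illustrative; the abstract does not give the gain statistics used.

```python
import numpy as np
from scipy.special import erfinv

def sample_pixel_gains(mean_gain, gain_sigma, shape, rng=None):
    """Per-pixel detector gain by inverse-transform sampling: one uniform
    random number per pixel is mapped through the cumulative normal
    distribution of gain, as the text describes."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=shape)                       # one draw per pixel
    # inverse CDF of the normal gain distribution via the error function
    return mean_gain + gain_sigma * np.sqrt(2.0) * erfinv(2.0 * u - 1.0)

# e.g. a 128 x 128 array with hypothetical gain statistics:
gains = sample_pixel_gains(100.0, 5.0, (128, 128))
print(gains.mean(), gains.std())
```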
Laser radars offer the potential for high range accuracy and range resolution due to short pulses and high-bandwidth receivers. For angularly non-resolved targets (1D profiling), analysis of the waveform offers the possibility of target recognition through range profiling. For 2D and 3D imaging, the angular and range resolution are the critical parameters for target recognition, while in other applications, such as lidar mapping, the range accuracy plays an important role in performance. The development of next-generation laser radars, including 3D sensing focal plane arrays (FPAs), enables a full range and intensity image to be captured in a single laser shot. Moreover, gated viewing systems also offer a viable solution for providing 3D target information. This paper uses simulation to illustrate the limits on accuracy and range resolution in waveform processing due to the laser pulse shape, detector noise, target shape and reflectivity, as well as turbulence.
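A toy version of this kind of Monte Carlo study is sketched below: simulate a noisy Gaussian return, estimate range by simple peak detection, and repeat to get an empirical accuracy. The pulse model, estimator and parameter values are assumptions made here, not the paper's simulation setup.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_estimate(true_range_m, pulse_fwhm_ns, snr, fs_ghz=2.0, window_ns=200.0):
    """One Monte Carlo trial: noisy Gaussian return, range from the
    waveform peak. Illustrative only."""
    t = np.arange(0.0, window_ns, 1.0 / fs_ghz)      # time axis, ns
    t0 = 2.0 * true_range_m / C * 1e9                # round-trip delay, ns
    sigma = pulse_fwhm_ns / 2.3548                   # FWHM -> standard deviation
    waveform = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    waveform += np.random.randn(t.size) / snr        # additive receiver noise
    return t[np.argmax(waveform)] * 1e-9 * C / 2.0   # peak time -> range, m

errors = [range_estimate(15.0, 3.0, snr=10.0) - 15.0 for _ in range(1000)]
print(np.std(errors))   # empirical range accuracy at this SNR and sampling rate
```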
We present the second phase of our work in project LISATNAS, devoted to the design of a differential absorption lidar (DIAL) based on a mid-infrared (IR) tunable optical parametric oscillator (OPO). Generation of tunable mid-infrared laser radiation using a two-stage tandem OPO was demonstrated. The first stage was based on a nonlinear KTP crystal and produced up to 45 mJ of 1.57 μm radiation when pumped by a commercial Q-switched Nd:YAG laser. The quality of the signal beam was improved by the use of an unstable resonator. An AgGaSe2 crystal was used in the second-stage OPO. Idler energies up to 1.2 mJ were generated in this stage within a tuning range from 6 to 12 μm. The receiver consisted of a 250 mm gold-mirror telescope and a pyroelectric detector with control electronics. Preliminary field test results for the detection of H2O using a retroreflector are presented.
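For context, the retrieval such an instrument performs follows the standard range-resolved DIAL equation; a textbook form is sketched below. This is offered only as background on the measurement principle, not as the authors' processing chain (their retroreflector configuration yields a path-integrated version of the same ratio).

```python
import numpy as np

def dial_number_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                        delta_sigma_cm2, delta_r_m):
    """Mean number density (molecules / cm^3) in the range cell [R1, R2]
    from the standard DIAL equation:
    N = (1 / (2 * dsigma * dR)) * ln(P_off(R2) P_on(R1) / (P_on(R2) P_off(R1)))."""
    delta_r_cm = delta_r_m * 100.0
    return (1.0 / (2.0 * delta_sigma_cm2 * delta_r_cm)
            * np.log((p_off_r2 * p_on_r1) / (p_on_r2 * p_off_r1)))

# Hypothetical numbers purely to show the call signature:
print(dial_number_density(1.0, 0.30, 1.0, 0.45, 1e-19, 100.0))
```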
The foremost approach to the detection of militarily significant targets in hyperspectral imagery is through the use of anomaly detection processes. These may be applied to imagery in order to identify those pixels that contain materials uncommon in the scene, on the assumption that military targets will match this criterion. The most common approach to anomaly detection for hyperspectral data is through the use of local-area anomaly detection techniques. These extract statistics of the scene in the near-locality of the pixel of interest and then use hypothesis test methods to decide whether the test pixel is anomalous to the training area. Alternative and potentially superior approaches are also available which first attempt to understand the composition of the whole scene in terms of ground cover types. These methods go on to use the extracted scene understanding model to find pixels containing materials that are rare or unseen in the imagery, and mark these as anomalies. This paper compares three anomaly detection approaches, one based on the local area paradigm and two using the scene understanding (global anomaly detection) approach. The latter pair of methods exploit different ways of extracting the scene model. The anomaly detection techniques are examined using real hyperspectral imagery with inserted anomaly pixels. A range of results is presented for different parameterisations of the algorithms. These include anomalous pixel maps at given detection rates and receiver operating characteristic curves.
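The paper does not name the specific local-area statistic it benchmarks, but the most common choice in this family is an RX-style Mahalanobis distance of the test pixel against the statistics of a local training area, sketched below as a representative example.

```python
import numpy as np

def rx_score(pixel, background_pixels):
    """Mahalanobis-type anomaly score of one pixel against a local
    background sample (classic RX formulation; used here only as a
    representative local-area detector).
    pixel: (bands,)   background_pixels: (n, bands)"""
    mu = background_pixels.mean(axis=0)
    cov = np.cov(background_pixels, rowvar=False)
    d = pixel - mu
    # small diagonal loading keeps the covariance invertible
    return float(d @ np.linalg.solve(cov + 1e-6 * np.eye(cov.shape[0]), d))

# Thresholding the score over the image gives the anomalous pixel map.
```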
In the spectral unmixing (SU) literature, particularly for remote sensing applications, there are claims that both geometric techniques and statistical techniques using independence as the cost function [1-4] are well suited to analysing hyperspectral imagery. These claims are rigorously examined and verified in this paper using sets of simulated and real data. The objective is to study how effective these two SU approaches are with respect to the modality and independence of the source data. The data sets are carefully designed such that only one parameter is varied at a time. The 'goodness' of the unmixed result is judged using the well-known Amari index (AI), together with a 3D visualisation of the deduced simplex in eigenvector space. A total of seven different algorithms have been studied, of which one is geometric and the others are based on statistical independence. Two of the statistical algorithms use a non-negativity constraint on the modelling errors (NMF and NNICA) as the cost function, and the other four employ the independent component analysis (ICA) principle, minimising mutual information (MI) as the objective function. The results show that the ICA-based statistical techniques are very effective at finding the correct endmembers (EMs), even for highly intermixed imagery, provided that the sources are completely independent. The modality of the data source is found to have only a second-order impact on the unmixing capabilities of ICA-based algorithms. All ICA-based algorithms are seen to fail when the MI of the sources is above 1, and the NMF type of algorithms is found to be even more sensitive to source dependence. Typical independence values for species found in the natural environment are in the range 15-30. This indicates that conventional statistical ICA and matrix factorisation (MF) techniques are not really suitable for the spectral unmixing of hyperspectral (HSI) data. Future work is proposed to investigate a dependent component clustering technique and a fused geometric and statistical approach, and to couple these with a modification of the conventional ICA-based algorithms to model the independence of the mixing rather than of the sources. This work forms part of the research programme supported by the EMRS DTC established by the UK MOD.
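The Amari index used to score the unmixing results can be computed as below (one common normalisation; the paper does not spell out which variant it uses). Zero means the mixing matrix is recovered perfectly up to permutation and scaling.

```python
import numpy as np

def amari_index(a_true, a_est):
    """Amari performance index between true and estimated mixing matrices.
    p = |pinv(A_est) @ A_true| should be a scaled permutation for a
    perfect unmixing, giving an index of 0."""
    p = np.abs(np.linalg.pinv(a_est) @ a_true)
    n = p.shape[0]
    rows = (p.sum(axis=1) / p.max(axis=1) - 1.0).sum()
    cols = (p.sum(axis=0) / p.max(axis=0) - 1.0).sum()
    return (rows + cols) / (2.0 * n * (n - 1))

# Sanity check: a permuted, scaled copy of A scores ~0.
a = np.random.rand(4, 4)
print(amari_index(a, a[:, [2, 0, 3, 1]] * 3.0))
```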
Future targeting systems aim to extend the range of air-to-ground target search, acquisition, tracking over time, and identification beyond that currently afforded by forward-looking infrared sensors. One technology option with the potential to fulfil this requirement is hyperspectral imaging. A solution to detection and identification at longer ranges is therefore the fusion of data from broadband and hyperspectral sensors. QinetiQ, under the Data and Information Fusion Defence Technology Centre, aims to develop a fully integrated spatial/spectral and temporal target detection/identification air-to-ground tracking environment. This will build upon current capabilities in target tracking, synthetic scene generation, sensor modelling, and hyperspectral and broadband target detection and identification algorithms to create a tool that can be used to evaluate data fusion architectures.
Dissemination of SF6 and tracking its dispersion in the atmosphere is a well-known technique used to predict how pollutants affect the environment. Remote thermal imaging of the atmospheric tracer plume is one of the methods employed to detect and track its dispersion. However, remote detection of SF6 plumes in a stable boundary layer (SBL) of the atmosphere with a multispectral infrared sensor is a challenging task. Under SBL conditions, the tracer cloud tends to disperse very slowly, and its temporal signature is therefore well mixed with the natural temperature variations over the background scene. Furthermore, SBL conditions are frequent during nighttime, when the thermal contrast between the air and the background scene is very low. In this article we propose an efficient method to overcome these difficulties. The local temperature variance of the clean background is compared to the variance measured at the same position while the cloud is present in the field of view. The local temperature variance is modified by the passage of radiation through the absorbing cloud, and the distinctive spectral signature of the atmospheric tracer is expressed in the relative strength of this effect in the different spectral bands of the IR sensor. The proposed technique is demonstrated with actual data collected during a field test in an urban area. An urban background is particularly suitable for applying this method owing to its inherently large thermal variance, arising from buildings, streets, parks, etc. We demonstrate the usefulness of this detection method for accurate quantitative estimation of the tracer cloud's density and shape.
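A minimal sketch of the variance comparison described above is shown below, computed per band and per pixel over a small window. The window size, the ratio statistic and the array layout are assumptions made here for illustration, not the article's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_ratio_map(clean, during, win=7):
    """Ratio of local temperature variance measured through the cloud to
    the variance of the clean background, per band and per pixel.
    clean, during: arrays of shape (bands, rows, cols)."""
    def local_var(img):
        m = uniform_filter(img, size=(1, win, win))
        m2 = uniform_filter(img * img, size=(1, win, win))
        return np.clip(m2 - m * m, 1e-9, None)
    return local_var(during) / local_var(clean)

# Ratios suppressed in the absorbing bands but near 1 in transparent bands
# flag pixels viewed through the tracer cloud.
```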
Future targeting systems, for manned or unmanned combat aircraft, aim to provide increased mission success and platform survivability by successfully detecting and identifying even difficult targets at very long ranges. One of the key enabling technologies for such systems is robust automatic target identification (ATI), operating on high-resolution electro-optic sensor imagery. QinetiQ have developed a real-time ATI processor which will be demonstrated with infrared imagery from the Wescam MX15 in airborne trials in summer 2005. This paper describes some of the novel ATI algorithms and the challenges overcome in porting the ATI from the laboratory onto a real-time system, and offers an assessment of likely airborne performance based on analysis of synthetic image sequences.
Automatic target recognition (ATR) and classification is a computationally demanding task, but with the recent increase in the computing power for industry standard FPGAs and DSPs it has become a feasible and very useful application in military sensing equipment. The ATR method presented here uses Zernike moments of binary representations of infra-red targets for the classification process. Zernike moments are known for their good image representation capabilities based on their orthogonality property. They are often used because the magnitude of the moments provides rotation and scale invariance. For the detection of the target candidates, a given region of interest (ROI) is searched for possible target signatures using a simple threshold segmentation. From the resulting binary objects, the biggest or center-most object can be selected. For this target, the minimum enclosing circle is determined using the bounding box found during the segmentation process. This minimum enclosing circle is scaled to the complex unit disk, where Zernike moments are defined. The moments up to order five are then computed directly from the binary image using a fast recursive algorithm. The resulting twelve-dimensional moment magnitude vector is then classified with a 1-NN algorithm, where a set of class templates has been pre-computed off-line for each class using a simulated annealing approach for cluster analysis.
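The feature extraction described above can be sketched directly: Zernike magnitudes up to order five give exactly twelve values. The version below uses a plain (non-recursive) radial polynomial and maps the image square, rather than the minimum enclosing circle, onto the unit disk; both simplifications are made here for brevity and differ from the paper's faster recursive implementation.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho)."""
    m = abs(m)
    r = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        r += c * rho ** (n - 2 * s)
    return r

def zernike_magnitudes(binary_img, max_order=5):
    """Magnitudes |Z_nm| of a binary silhouette on the unit disk, for all
    n <= max_order, 0 <= m <= n with n - m even (12 values for order 5)."""
    h, w = binary_img.shape
    y, x = np.mgrid[0:h, 0:w]
    xn = (2.0 * x - (w - 1)) / (w - 1)       # map image square to [-1, 1]
    yn = (2.0 * y - (h - 1)) / (h - 1)
    rho, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    f = binary_img.astype(float) * (rho <= 1.0)
    feats = []
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:
                continue
            kernel = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            feats.append(abs((n + 1) / np.pi * np.sum(f * kernel)))
    return np.array(feats)                   # 12-dimensional feature vector
```

The resulting vector would then be matched to the nearest pre-computed class template (1-NN), as the abstract describes.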
In this paper we present a novel way to analyze LADAR images and model their data. Having an aerial LADAR image as the data source, our aim is to extract a parametric description of the ground of the scenario in order to discern between the data samples that belong to the ground and those that belong to vehicles, objects or clutter. Once the samples are divided, we process each of the objects to perform an early classification referring to the object type (vehicle, building or clutter). The final step of our method is to estimate the pose of the objects of interest by building their corresponding oriented 3D bounding boxes.
Our method uses robust statistics in order to extract proper descriptions of both the ground and the oriented bounding boxes of the objects. Specifically, we use two robust parameter estimators, the Least Median of Squares and the Variable Bandwidth Quick Maximum Density Power Estimator, depending on the percentage of outliers that may be present in the different steps of our approach. Our method is open: it can be used alongside other approaches that focus on extracting 3D invariant features, or enhanced by applying a recognition step with the aid of model databases and 3D registration algorithms, such as ICP.
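As a concrete illustration of the Least Median of Squares step, the sketch below fits a ground plane to LADAR samples by keeping the candidate with the smallest median squared residual. The plane parameterisation and trial count are choices made here, not the paper's settings.

```python
import numpy as np

def lmeds_ground_plane(points, n_trials=500, rng=None):
    """Robust plane fit z = a*x + b*y + c by Least Median of Squares:
    fit many minimal (3-point) samples and keep the one whose median
    squared residual over all points is smallest.
    points: (n, 3) array of LADAR samples."""
    rng = np.random.default_rng() if rng is None else rng
    best, best_med = None, np.inf
    for _ in range(n_trials):
        p = points[rng.choice(len(points), size=3, replace=False)]
        A = np.c_[p[:, 0], p[:, 1], np.ones(3)]
        try:
            abc = np.linalg.solve(A, p[:, 2])
        except np.linalg.LinAlgError:        # degenerate (collinear) sample
            continue
        residuals = points[:, 2] - (points[:, :2] @ abc[:2] + abc[2])
        med = np.median(residuals ** 2)
        if med < best_med:
            best, best_med = abc, med
    return best, best_med

# Samples far above the fitted plane are then candidates for vehicles,
# buildings or clutter, as described above.
```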
In this paper, a decision support system for ship identification is presented. The system receives as input a silhouette of the vessel to be identified, previously extracted from a side view of the object. This view could have been acquired with imaging sensors operating in different spectral ranges (CCD, FLIR, image intensifier). The input silhouette is preprocessed and compared to those stored in a database, retrieving a small number of potential matches ranked by their similarity to the target silhouette. This set of potential matches is presented to the system operator, who makes the final ship identification. The system makes use of an evolved version of the Curvature Scale Space (CSS) representation. In the proposed approach, curvature extrema, instead of zero crossings, are tracked during silhouette evolution, improving robustness and making it possible to cope successfully with cases where the standard CSS representation is unstable. In addition, the use of local curvature was replaced with the more robust concept of lobe concavity, yielding significant additional gains in performance. Experimental results on actual operational imagery demonstrate the excellent performance and robustness of the developed method.
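The quantity tracked across scales in a CSS-style representation is the curvature of the progressively smoothed contour, which can be computed as below. This is a generic sketch of contour curvature at one scale, not the authors' evolved pipeline (which tracks curvature extrema and lobe concavity rather than zero crossings).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def contour_curvature(x, y, sigma):
    """Curvature of a closed silhouette contour smoothed at scale sigma.
    x, y: 1-D arrays of contour coordinates sampled along the closed curve."""
    xs = gaussian_filter1d(x.astype(float), sigma, mode="wrap")
    ys = gaussian_filter1d(y.astype(float), sigma, mode="wrap")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

# Tracking the local extrema of this curvature as sigma grows yields the
# scale-space signature that is compared against the silhouette database.
```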
BAE SYSTEMS has developed a Low Cost Targeting System (LCTS) consisting of a FLIR for target detection; laser-illuminated, gated imaging for target identification; a laser rangefinder and designator; GPS positioning; and auto-tracking capability, all within a small, compact system. This system has proven its ability to acquire targets, range and identify them, and designate them or provide precise geolocation coordinates. The system is based on BAE Systems' proven micro-bolometer passive LWIR camera coupled with Intevac's new EBAPS camera. A dual-wavelength diode-pumped laser provides eyesafe ranging and target illumination as well as designation; a custom detector module senses the return pulse for target ranging and to set the range gates for the gated camera. Intevac's camera is a CMOS-based device with user-selectable gate widths and can read out at up to 28 frames/second when operated in VGA mode. The transferred-electron photocathode enables high-performance imaging in the SWIR band by providing single-photon detection at high quantum efficiency. Trials show that the current detectors offer complete extinction of signals outside the gated range, thus providing high resolution within the gated region. The images show high spatial resolution arising from the use of solid-state focal plane array technology. Imagery has been collected in both the laboratory and the field to verify system performance under a variety of operating conditions.
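The relation between the measured range and the camera gate timing is purely geometric; a minimal sketch is shown below. The example numbers are arbitrary and do not describe LCTS performance.

```python
C = 299_792_458.0  # speed of light, m/s

def range_gate(target_range_m, gate_depth_m):
    """Gate delay and width for laser-gated imaging: open the camera when
    light from the near edge of the range slice arrives and close it after
    the slice depth has been swept."""
    delay_s = 2.0 * target_range_m / C
    width_s = 2.0 * gate_depth_m / C
    return delay_s, width_s

# A 3 km target with a 30 m gate: ~20 us delay and ~200 ns gate width.
print(range_gate(3000.0, 30.0))
```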
Recent progress on the development of a long-range, high-resolution 3D active imaging sensor is described. Diffraction-limited angular resolution of 20 μrad and sub-metre down-range resolution are demonstrated at a stand-off range of 8 km. A scanned single-pixel arrangement was employed, using an all-fiber coherent lidar operating in a chirp pulse-compression mode. The monostatic antenna had an aperture of 150 mm, and the image was built up using a piezoelectric tip/tilt stage positioned prior to the final expansion of the beam. Transmit/receive multiplexing was achieved with a fiber-optic circulator. Examples of recently acquired images consisting of 150x150 pixels with 1000 range cells of 30 cm per pixel at a stand-off range of 8 km are presented.
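For a rough sanity check on the quoted angular resolution, the sketch below applies the Airy criterion to the 150 mm aperture. The ~1.5 μm wavelength is an assumption (typical for an all-fiber source) and is not stated in the abstract.

```python
import math

def diffraction_limit(wavelength_m, aperture_m, range_m=8000.0):
    """Diffraction-limited angular resolution (Airy criterion) and the
    corresponding footprint at the given range. Illustrative only."""
    theta = 1.22 * wavelength_m / aperture_m
    return theta, theta * range_m

theta, footprint = diffraction_limit(1.55e-6, 0.150)
print(round(theta * 1e6, 1), "urad;", round(footprint, 2), "m at 8 km")
# ~12.6 urad ideal; the reported 20 urad (~16 cm at 8 km) is of the same order.
```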
The DGA (Delegation Generale de l'Armement) is interested in the determination of sky and ground characteristics. In particular, cloud optical properties as well as land-surface and sea-surface temperatures must be determined accurately. To obtain a statistical description of cloud properties, we have created a cloud database called SALIC (SAtellite-LIdar-Clouds). Two algorithms, one for sea-surface temperature and one for land-surface temperature, were recently included in the database. Three different kinds of measurements are used to build up the database: radiosoundings, ground-based lidar measurements, and satellite data obtained from the AVHRR3 radiometer carried on the NOAA-16 polar-orbiting satellite. This paper presents results for a period covering two years.
We discuss the potential of waveform-digitizing scanning lidars for two different applications, surveying and collision avoidance, both from airborne and terrestrial platforms. These two applications impose remarkably different requirements on the scanning lidar, which can nevertheless both be met with a similar hardware architecture by appropriately designing the processing algorithms for the digitized echo waveforms, taking into consideration fundamental parameters such as laser beam geometry and scan speed.
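A very simple stand-in for such echo-waveform processing is sketched below: split the digitized waveform into discrete returns and estimate each return's time by a centroid. Real systems typically fit pulse models instead, and, as the paper stresses, the algorithms are application-specific; this is only an illustration of the basic operation.

```python
import numpy as np

def detect_echoes(waveform, dt_ns, threshold):
    """Detect above-threshold echo segments in a digitized waveform and
    return the centroid time (ns) of each; multiply by c/2 for range."""
    above = waveform > threshold
    echoes, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i                          # segment begins
        elif not flag and start is not None:
            seg = waveform[start:i]            # segment ends, take centroid
            t = (start + np.arange(len(seg))) * dt_ns
            echoes.append(float(np.sum(t * seg) / np.sum(seg)))
            start = None
    return echoes
```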