This paper describes the feasibility and implementation of an in-drilling alignment method for an inertial-navigation-based measurement-while-drilling system, as applied to the vertical drilling process, where there is seldom direct access to a reference measurement due to the constraints of the communication network architecture and telemetry framework. The method sequentially propagates the reference measurement over multiple nodes, each of which carries an integrated inertial measurement unit and runs an extended-Kalman-filter-based signal processing unit, performing a measurement update at each node until the final update of the main sensor node embedded in the borehole assembly is complete. This is done particularly to keep track of the yaw angle during the drilling process.
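The node-to-node propagation of the reference measurement can be illustrated with a minimal scalar Kalman-update sketch. All variances, node count and numeric values below are hypothetical, and the paper's actual filter is a full extended Kalman filter over the IMU states; this only shows how a surface reference is handed down the chain, each corrected estimate becoming the measurement for the next node:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse estimate (x, P) with
    a measurement z of variance R."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # corrected estimate
    P_new = (1.0 - K) * P    # reduced uncertainty
    return x_new, P_new

def propagate_reference(yaw_ref, nodes, link_noise_var=0.01):
    """Propagate a surface yaw reference down a chain of nodes.
    Each node fuses the value passed from the node above with its own
    prior; the corrected estimate becomes the reference for the next node."""
    z = yaw_ref
    for node in nodes:
        node["x"], node["P"] = kalman_update(node["x"], node["P"], z, link_noise_var)
        z = node["x"]  # corrected estimate handed to the next node
    return nodes[-1]["x"]  # final update at the lowest (main) sensor node

# Example: three nodes with drifted priors; true yaw at the surface is 0.50 rad.
nodes = [{"x": 0.60, "P": 0.05}, {"x": 0.55, "P": 0.05}, {"x": 0.70, "P": 0.05}]
final_yaw = propagate_reference(0.50, nodes)
```

After the pass, the bottom node's estimate is pulled toward the surface reference and its variance shrinks, which is the point of the sequential alignment.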
KEYWORDS: Cameras, Stars, Deconvolution, Sensors, Convolution, Signal processing, Image restoration, Time of flight cameras, Phase shifts, Time metrology
Time-of-Flight cameras have become one of the most widespread low-cost 3D-sensing devices. Most of them do not actually measure the time the light needs to hit an object and come back to the camera, but the phase difference with respect to a reference signal. This requires special pixels with a complex spatial structure, such as PMD pixels, able to sample the cross-correlation function between the incoming signal reflected by the scene and the reference signal. The complex structure, together with the presence of in-pixel electronics and the need for compact readout circuitry for both pixel channels, suggests that systematic crosstalk effects will arise in devices of this kind. For the first time, we take advantage of recent results on subpixel spatial responses of PMD pixels to detect and characterize crosstalk occurrences. Well-defined crosstalk patterns have been identified and quantitatively characterized through integration of the inter-pixel spatial response over each sensitive area. We cast the crosstalk problem as an image convolution and provide deconvolution kernels for cleaning PMD raw images of crosstalk. Experiments on real PMD raw images show that our results can be used to undo the lowpass filtering caused by crosstalk in high-contrast image areas. The application of our kernels to undo crosstalk effects reduces the depth RMSE by up to 50% in critical areas.
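The convolution model of crosstalk and its correction can be sketched as follows. The kernel values here are invented for illustration, and the approximate inverse `h_inv ≈ 2δ − h` is a generic truncated Neumann-series construction, not the kernels derived in the paper from measured inter-pixel spatial responses:

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Naive 2D 'same' convolution with zero padding (no SciPy dependency)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# Illustrative crosstalk kernel: each pixel leaks a little signal to its
# 4-neighbours (values made up; real kernels come from the measured
# subpixel spatial responses).
h = np.array([[0.00, 0.02, 0.00],
              [0.02, 0.92, 0.02],
              [0.00, 0.02, 0.00]])

# First-order approximate inverse: h_inv = 2*delta - h.
delta = np.zeros_like(h); delta[1, 1] = 1.0
h_inv = 2.0 * delta - h

# High-contrast raw image: a single bright pixel on a dark background.
raw_true = np.zeros((7, 7)); raw_true[3, 3] = 1.0
raw_measured = convolve2d_same(raw_true, h)         # crosstalk low-pass filters it
raw_cleaned = convolve2d_same(raw_measured, h_inv)  # deconvolution kernel applied
```

The cleaned impulse is closer to the true value than the measured one, mirroring the sharpening effect reported in high-contrast areas.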
This paper first describes the innovative topology and structure of a wireless ad hoc and sensor network in a so-called line-in-the-underground formation, and the feasibility of achieving a reliable wireless connection underground with regard to a borehole telemetry system. It further describes a routing algorithm/protocol implementation, based on a modification of the ad hoc on-demand distance vector protocol, that achieves a reliable underground communication scheme for the wireless ad hoc network deployed underground for real-time sensor data acquisition, as applied in the borehole telemetry system. Simulations and experiments are conducted to investigate and verify the effectiveness of this routing technique, and the performance results are presented.
KEYWORDS: Sensors, Target detection, Detection and tracking algorithms, Wavelets, Sensor networks, Denoising, Signal processing, Data acquisition, Data processing, Digital signal processing
Target detection and classification are crucial tasks in wireless sensor networks for outdoor security applications. This paper presents a novel concept to detect and further discriminate dynamic objects (persons and vehicles) in a WSN using geophones that act as dynamic presence detectors in a given field of interest. The basic concept is to treat an individual footstep of a person and/or the motion of a vehicle as an event within the detection range of the geophone. In an ongoing project, the design of a wireless geophone sensor node has been implemented. The design is based on the Gumstix Overo Fire Computer-On-Module, chosen for its high processing performance and built-in wireless capabilities. A high-resolution analog-to-digital converter is integrated into the module to acquire data from the geophone. The raw data is processed on the sensor node to detect and classify a target. An adaptive wavelet denoising algorithm is applied in real time to extract the target signal from the noisy real-world environment; this algorithm adjusts its threshold based on the energy of the wavelet series coefficients. Timestamps of the events are extracted using an event detection method, and these events are used to classify a target at the node level.
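Energy-adaptive wavelet thresholding of the kind described can be illustrated with a generic sketch: a one-level Haar transform and a threshold set from the RMS energy of the detail coefficients. The paper's actual wavelet, decomposition depth and threshold formula may well differ:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x):
    """Energy-adaptive soft thresholding: the threshold scales with the
    RMS energy of the detail coefficients, so noisier frames are
    thresholded harder."""
    a, d = haar_dwt(x)
    thr = np.sqrt(np.mean(d ** 2))                      # energy-based threshold
    d_soft = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return haar_idwt(a, d_soft)

# Example: a footstep-like pulse buried in sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = denoise(noisy)
```

The denoised trace has lower error against the clean pulse than the raw input, which is what the node-level event detector needs before timestamp extraction.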
Spatially distributed networks of sensor nodes with onboard seismic and acoustic sensors are an important class of emerging networked systems for various security applications. One of the main tasks in these applications is target detection. To achieve this with improved accuracy, the sensors must process and share information efficiently. This paper presents a novel approach to fusing the data of acoustic and seismic sensors based on correlation measures, so that a high detection range and/or detection rate can be achieved. The method assigns a weight to each sensor and adjusts these weights as changes in the correlation measures are observed: the sensor signal with the greater correlation measure receives the greater weight, and vice versa. One advantage of this method is that the sensor weights are adjusted dynamically for real-time data, without depending on prior information about the sensors' data. The method takes the limited range of both the acoustic and the seismic sensors into account and fuses the signals in terms of the maximum possible detection range. If one of the sensors fails, the method still provides the target information.
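A minimal sketch of correlation-weighted fusion along these lines follows. The windowing, the use of the channel mean as a consensus reference, and all signal parameters are assumptions for illustration; the paper's exact correlation measure and weight-update rule are not specified here:

```python
import numpy as np

def correlation_weights(signals, window):
    """Weight each sensor by the correlation of its recent window with the
    mean of all channels; a failed (flat or uncorrelated) sensor receives
    a near-zero weight."""
    recent = np.array([s[-window:] for s in signals])
    consensus = recent.mean(axis=0)
    corrs = []
    for r in recent:
        if np.std(r) < 1e-12:          # dead channel: no variation at all
            corrs.append(0.0)
        else:
            c = np.corrcoef(r, consensus)[0, 1]
            corrs.append(max(c, 0.0))  # ignore anti-correlated noise
    corrs = np.array(corrs)
    if corrs.sum() == 0.0:
        return np.full(len(signals), 1.0 / len(signals))
    return corrs / corrs.sum()

def fuse(signals, window=64):
    """Weighted sum of the latest samples; weights adapt on every call."""
    w = correlation_weights(signals, window)
    latest = np.array([s[-1] for s in signals])
    return float(np.dot(w, latest)), w

# Example: seismic and acoustic channels observing the same target, with
# the acoustic channel degraded to pure noise (out of range or failed).
rng = np.random.default_rng(1)
t = np.arange(128)
target = np.sin(0.3 * t)
seismic = target + 0.1 * rng.standard_normal(t.size)
acoustic = 0.5 * rng.standard_normal(t.size)   # failed / out-of-range sensor
fused, weights = fuse([seismic, acoustic])
```

The correlated (seismic) channel ends up dominating the fused estimate, so target information survives the failure of the other sensor.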
This paper describes the results of a tool development process and a still-ongoing research program in multilevel algorithm design and validation for multisensor systems. Our work covers both the algorithm design process, including the simulation efforts, and the implementation of the algorithms on the sensor node prepared for real-time processing within a vision system network.
The sensor node hardware is based on System-on-Programmable-Chip (SOPC) technology. This gives us the flexibility to interface different kinds of sensor elements (matrix and line sensors) and the processing power to provide real-time capabilities in high data-rate applications. At present, our hardware module for vision applications also uses a CPU module, which results in high flexibility concerning the communication effort. Our design also supports the use of CPU modules within the SOPC design itself.
Mapping algorithms onto a distributed sensor network can be done in either a centralized or a decentralized way: the whole algorithm runs on one sensor node, or parts of the algorithm are implemented on other nodes within the network. Beginning at the design and simulation level, several levels are thus available for optimizing, testing and validating the developed algorithm.
At the Center for Sensor Systems at the University of Siegen, an image processing system for measuring the torsion of special-section tubes has been developed. Taking into account the tube geometry and the profile areas used for determining the torsion, an angular resolution below 0.25 degrees is achieved. The measuring system is applied within the scope of quality control.
In this paper we present a new approach for measuring high-precision tubes with an accuracy of +/- 10 micrometers. The quantities of greatest interest are wall thickness, diameter, eccentricity, and so on. The measuring cycle takes no more than 5 seconds, including the complete tube handling. The preferred sensor concept is the triangulation technique with a matched sensor head.
Within the scope of extended production monitoring, a 100% inspection of the workpiece is essential. In particular, the measurement of inside contours in small drillings and hollows is a problem in various industrial areas. Dimensional constraints alone are enough to exclude most of the tried-and-tested methods for inside measurement in workpiece inspection.
This contribution introduces a compact sensor system for detecting abnormalities on high-grade, polished surfaces in the production process. It is usable for TQM of the coating quality of lenses, glass plates, wafers and other high-quality products. The system is optimized for non-destructive, high-speed scanning (2.5 m/s) of transparent materials with a low reflection rate, with a resolution down to a few micrometers, achievable even in a noisy industrial environment. It is available in a 19-inch rack with a Profibus data link.
Presently there is still a remarkable gap between the requirements and the capabilities of 3D vision in the field of industrial automation, especially in manufacturing-integrated 100% quality control. For these and many other applications, such as security and traffic control, a new, extremely fast, precise and flexible 3D camera concept is presented in this paper. In order to obtain the geometrical 3D information, the whole 3D object or scene is illuminated simultaneously by means of rf-modulated light. This is realized using optical modulators such as Pockels cells or FTR optical components (FTR: frustrated total reflection). The backscattered light carries the depth information within the local delay of the phase front of the rf-modulated light intensity. If the reflected wave front is mixed again within the whole receiving aperture, using the same optical 2D modulation components and the same rf frequency, an rf interference pattern is produced. A CCD camera may be applied to sample these rf-modulation interferograms. In order to reconstruct the 3D image, a minimum of three independent interferograms has to be evaluated; they may be produced either by applying three different rf phases or three different rf frequencies. This procedure is able to deliver up to some tens of high-resolution 3D images per second with some hundred thousand voxels (volume elements). Such remarkable progress can be achieved by means of three key steps: first, by decoupling the opto-electronic receiver device from real-time requirements through homodyne mixing of CW-modulated light; second, by applying the rf modulation signal as an optical reference signal to the receiving optical mixer; and third, by using a consistently 2D layout of the transmitted illumination, of the optical mixer in the receiving aperture, and of the optoelectronic sensing element, e.g., a CCD chip.
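The evaluation of three interferograms taken at different rf phases follows standard three-bucket demodulation; a minimal sketch, assuming the sampling convention I_k = B + A·cos(φ − 2πk/3) (amplitude, offset and modulation frequency below are placeholder values):

```python
import numpy as np

def phase_from_three_samples(I0, I1, I2):
    """Recover the rf phase delay from three interferograms taken at
    reference phases 0, 120 and 240 degrees (three-bucket demodulation,
    assuming I_k = B + A*cos(phi - 2*pi*k/3))."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I2), 2.0 * I0 - I1 - I2)

def depth_from_phase(phi, f_mod, c=299_792_458.0):
    """Convert the phase delay to one-way distance for modulation
    frequency f_mod; only the unambiguous range c/(2*f_mod) is resolved."""
    phi = np.mod(phi, 2.0 * np.pi)
    return c * phi / (4.0 * np.pi * f_mod)

# Simulated pixel: amplitude A, offset B, true phase 1.0 rad.
A, B, phi_true = 0.8, 1.5, 1.0
thetas = 2.0 * np.pi * np.arange(3) / 3.0
I0, I1, I2 = (B + A * np.cos(phi_true - th) for th in thetas)
phi_est = phase_from_three_samples(I0, I1, I2)
```

The same arctangent evaluation applies per pixel of the sampled rf-modulation interferograms; using three rf frequencies instead of three phases changes only how the samples are produced, not this reconstruction step.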
The paper addresses multi-sensor data fusion for the navigation of a four-wheel vehicle with two driven wheels. The main advantage of such a configuration is its flexibility with respect to free motion and navigation; this advantage is paid for, however, with increased complexity in the dynamic model of the vehicle. The basic sensors of the vehicle comprise a fiber-optic gyro, continuously delivering angular orientation information (namely the angular velocity), and a landmark sensor, delivering global position information at those instants when a landmark is available and within the reach of the sensor. Optionally, an undriven measuring wheel, which is therefore not subject to slippage, can be added. The control inputs to the vehicle are taken to be nominally known but noisy, subject to measuring errors and unknown influences. The approach taken in the paper essentially uses Kalman filtering ideas, namely extended Kalman filtering, to implement multi-model filtering. The Kalman filter incorporates the different noisy measurements in order to 'fuse' them into one precise position and orientation estimate, copes with the only temporarily available global information, and automatically falls back to dead reckoning where no global information is available. The paper covers the state-space formulation of the problem and discusses the different models needed to describe the different motions. Based on a realistic state-space model, the corresponding Kalman filter is designed and tested with simulated measurement data delivered by a truth-model simulator.
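The dead-reckoning/landmark-update cycle described above can be sketched with a minimal EKF. The unicycle motion model, the noise levels and the landmark measurement model are simplifying assumptions for illustration, not the paper's full vehicle dynamics or multi-model filter:

```python
import numpy as np

def predict(x, P, v, omega, dt, Q):
    """Dead-reckoning prediction for state [x, y, heading]:
    v is the (noisy) speed input, omega the gyro angular rate."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])        # Jacobian of the motion model
    return x_new, F @ P @ F.T + Q

def landmark_update(x, P, z, R):
    """Fuse a global position fix from the landmark sensor: z = [x, y] + noise."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Drive straight for 10 steps on dead reckoning only, then one landmark fix.
x = np.array([0.0, 0.0, 0.0]); P = 0.01 * np.eye(3)
Q = np.diag([1e-3, 1e-3, 1e-4]); R = 0.05 * np.eye(2)
for _ in range(10):
    x, P = predict(x, P, v=1.0, omega=0.0, dt=0.1, Q=Q)
trace_before = np.trace(P)                  # uncertainty grown by dead reckoning
x, P = landmark_update(x, P, z=np.array([1.02, -0.01]), R=R)
trace_after = np.trace(P)                   # uncertainty reduced by the fix
```

Between landmarks the covariance grows, exactly the dead-reckoning regime; each fix shrinks it again, which is the fusion behavior the abstract describes.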
KEYWORDS: Imaging systems, Cameras, 3D metrology, Laser range finders, Stereoscopic cameras, 3D vision, Interfaces, Control systems design, Sensors, Data processing
Basically, the Multivision System consists of two different sensor systems combined into a multisensor system via the data processing. The first is a 2D picture processing system, whereas the second is the 3D Laser Range Finder module. In conjunction with the X/Y scanner, this module feeds digital picture data--information on the object's position in terms of its Z axis and its tilt and turn angles--to the interface. The Laser Range Finder provides absolute range values at the interface, and the laser spot on the surface of the measured object is detected by the camera system. Any information about a scene provided by the camera system (e.g. edge detection, edge description) can be used to control the laser spot for the 3D measurement. Thereby the location of a scan point measured with the laser scanner can be transformed into the camera system, so that its position in the camera image is calculable. An easy way to describe the geometric information of such points is the use of coordinate systems; the multisensor system is therefore modelled by a set of different coordinate systems: the scanner, the camera and the Cartesian transfer coordinate system. The paper deals with geometric modelling, the control system architecture, the practical system design and some accuracy considerations. The first applications addressed in this paper are the navigation of autonomous vehicles and obstacle detection in such an environment.
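The transformation of a scan point from the scanner coordinate system into the camera image can be sketched as a rigid-body transform followed by a pinhole projection. The rotation, translation and camera intrinsics below are hypothetical placeholder values standing in for the system's actual calibration:

```python
import numpy as np

def transform_point(p_scanner, R, t):
    """Map a scan point from the scanner frame to the camera frame:
    p_cam = R @ p_scanner + t, with (R, t) from extrinsic calibration."""
    return R @ p_scanner + t

def project_to_pixel(p_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point to pixel coordinates."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Hypothetical calibration: scanner offset 10 cm from the camera along X,
# frames aligned (identity rotation); intrinsics are placeholders.
R = np.eye(3)
t = np.array([0.10, 0.0, 0.0])
p_scan = np.array([0.0, 0.0, 2.0])   # laser spot 2 m ahead of the scanner
p_cam = transform_point(p_scan, R, t)
pixel = project_to_pixel(p_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

With this chain the expected image position of the laser spot is known in advance, so the camera can verify or track the spot during the 3D measurement.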