Dimensional control by artificial vision is becoming a standard tool for industrialists interested in such remote, non-contact measurement methods. The precision of these systems depends largely on camera resolution, and high precision requires a very costly CCD sensor and frame grabber. A method is proposed that significantly increases the precision of dimensional measurements without increasing hardware complexity. The algorithm is also quite robust against noisy images such as are encountered in real-world imaging: a precision of 1/16 pixel can easily be obtained at an SNR of 2 dB. Dimensional control by artificial vision generally involves an edge detection stage, and it is this step that we propose to improve. Many edge detection techniques with pixel resolution are well known, and some are designed to be robust against image corruption. On the other hand, B-spline interpolation methods have been considerably improved and popularized by the signal processing techniques proposed by M. Unser et al. An algorithm resulting from the merging of these two ideas is proposed in this paper. In this algorithm, the interpolation is prepared by an optimized filtering stage and by a detection of local maxima of the gradient.
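The core idea of refining a pixel-level edge to sub-pixel accuracy can be sketched as follows. This is a minimal illustration using a parabolic fit through the gradient samples around the discrete maximum, a common refinement step; the paper's actual method uses B-spline interpolation and is more elaborate. All names are illustrative.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with sub-pixel accuracy.

    The gradient is computed, its discrete maximum found, and a parabola
    is fitted through the three gradient samples around that maximum;
    the parabola's vertex gives the sub-pixel edge position.
    """
    g = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    k = int(np.argmax(g[1:-1])) + 1          # discrete gradient maximum
    y0, y1, y2 = g[k - 1], g[k], g[k + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(k)
    return k + 0.5 * (y0 - y2) / denom       # vertex of fitted parabola
```

On a smooth synthetic edge this already recovers the edge position to a small fraction of a pixel, which illustrates why interpolation-based refinement can outrun the raw sensor resolution.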
Camera-based probes and machine vision have found increased use in coordinate measuring machines in recent years, and the calibration of artifacts for these probes has become an important task for NIST. Until recently these artifacts have been calibrated using one- or two-dimensional measuring machines with electro-optic microscopes or scanning devices as probes. These sensors evaluate only a small section of the edge of a grid mark, and irregularities in this particular spot from local deformations or contamination influence the measurement result. Since these measurements result in a single number based on the entire field of view, the influence of small irregularities is not easily detected. Since different probes scan different parts of the grid mark edge, they may give systematically different positions of the mark. The conversion to video-based sensors has allowed more flexibility in edge detection, although most instruments still use least-squares fits as the substitute geometry for straight edges. This method is very susceptible to noise and edge irregularities. We present some experiments for finding sub-pixel edge point locations and fitting the set of edge points to a line using a fairly simple least-sum-of-absolute-deviations fit. Data from a high-accuracy 2D measuring machine are used to show the strengths of the algorithms.
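A least-sum-of-absolute-deviations (L1) line fit can be approximated with iteratively reweighted least squares, one standard solver for this problem; the abstract does not specify which solver was used, so the sketch below is an assumption. The key property it demonstrates is robustness to a gross outlier that would badly skew an ordinary least-squares fit.

```python
import numpy as np

def l1_line_fit(x, y, iters=50, eps=1e-8):
    """Fit y = a*x + b minimizing the sum of absolute deviations.

    Iteratively reweighted least squares: each pass solves a weighted
    least-squares problem with weights 1/|residual|, which converges
    toward the L1 (least absolute deviations) solution.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)                    # least-squares start
    for _ in range(iters):
        r = np.abs(y - (a * x + b))
        sw = 1.0 / np.sqrt(np.maximum(r, eps))    # sqrt of weight 1/|r|
        A = np.column_stack([x * sw, sw])
        a, b = np.linalg.lstsq(A, y * sw, rcond=None)[0]
    return a, b
```

With nine collinear points and one point displaced by a large amount, the L1 fit essentially passes through the nine inliers, whereas least squares would be pulled toward the outlier.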
Manufacturing needs in many industries, especially aerospace and automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model, and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that the measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution, and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization, and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms be able to perform in the presence of measurement noise, while making no special assumptions about the regularity of the data distribution. To deal with real-life situations, a number of supporting functions for geometric modeling were required, and these are described. The presented methodology was applied to real aeroengine parts, and the experimental results are presented.
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve performance. The architecture is prototyped on an FPGA, but it could be implemented as a dedicated VLSI circuit to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
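The memory-access-minimizing scheme can be modeled in software: line buffers hold the rows already read, and a 3x3 window slides over them, so each image pixel is fetched from memory exactly once. This is a behavioral sketch of the general line-buffer idea, not the paper's FPGA design; names are illustrative.

```python
import numpy as np

def convolve3x3_linebuffer(image, kernel):
    """3x3 mask applied to an image, reading each pixel exactly once.

    Software model of the register/line-buffer scheme used in FPGA
    convolvers: three row buffers are shifted as rows stream in, and
    the 3x3 window is taken from the buffers, never from main memory.
    """
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    buf = np.zeros((3, w))                    # three streaming row buffers
    for r in range(h):
        buf = np.roll(buf, -1, axis=0)        # shift buffers up
        buf[2] = image[r]                     # single read of this row
        if r >= 2:
            for c in range(w - 2):
                window = buf[:, c:c + 3]      # 3x3 register window
                out[r - 2, c] = np.sum(window * kernel)
    return out
```

The result matches a direct windowed sum; on hardware the inner products would run in parallel pipelined multiply-accumulate units.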
A high-performance, high-sensitivity CCD line-scan camera for use in machine vision systems is presented. The camera incorporates an on-board micro-controller as well as PLDs (programmable logic devices) that allow computer control of image acquisition, image processing, and image analysis. The micro-controller/PLD combination provides embedded image processing and analysis capability whereby data compression can be achieved, thus reducing the system hardware requirements. Users have control over algorithm parameters, allowing for dynamic changes in the inspection target. Algorithms and micro-controller firmware are completely in-system programmable via a serial communications link. Static and adaptive gray-scale thresholding algorithms are presented, as well as a sample application in which a maximum of twenty cameras can be networked to a single host computer. Applications for the camera include web inspection, parts inspection, template matching, and gauging.
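The adaptive thresholding idea for a line-scan row can be sketched as comparing each pixel against a local sliding-window mean. This is one common adaptive scheme, offered as an assumption; the camera's actual firmware algorithm is not detailed in the abstract, and the window and offset parameters here are illustrative.

```python
import numpy as np

def adaptive_threshold(line, window=15, offset=5):
    """Adaptive thresholding of one line-scan row.

    Each pixel is marked if it exceeds the local mean of a sliding
    window by more than `offset`, so the threshold tracks slow
    illumination or background variation along the line.
    """
    line = np.asarray(line, float)
    pad = window // 2
    padded = np.pad(line, pad, mode='edge')
    kernel = np.ones(window) / window
    local_mean = np.convolve(padded, kernel, mode='valid')  # same length as line
    return (line > local_mean + offset).astype(np.uint8)
```

Because the threshold follows the local mean, a small bright defect stands out even on a slowly brightening background where a single static threshold would fail.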
This paper proposes a benchmark for vision system performance prediction, estimation, and evaluation. The benchmark is based upon a cooperative edge-region image segmentation task. It is composed of three algorithms: multi-thresholded image connected component labelling, image region segmentation, and data reorganization. Formalized algorithms are given, and implementation details on SIMD and MIMD computers are discussed. Benchmark result analysis and architectural implications for efficient support of vision applications are proposed as well.
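A sequential reference version of the connected component labelling step can be written as a BFS flood fill; the benchmark's SIMD/MIMD variants would replace the queue with parallel row-wise label merging. The sketch below is illustrative, not the paper's formalized algorithm.

```python
from collections import deque
import numpy as np

def label_components(binary):
    """4-connected component labelling of a binary image (BFS flood fill).

    Scans for unlabelled foreground pixels and floods each component
    with a fresh label; returns the label image and the component count.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                queue = deque([(i, j)])
                labels[i, j] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current
```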
A specific configuration for liquid flow metrology consists of a flow of falling drops coupled with a preferred measuring method that derives the flow directly from the drop count. Given the inaccuracy of this counting method, alternative methods have been proposed that measure the volume of each falling drop. The principle consists in deriving the volume from geometric measurements obtained by vision, and the basic problem can be described as the estimation of the volume of a drop from its projection. This paper reviews methods previously used and provides an analysis of qualitative and quantitative aspects of drop volume estimation for flow metrology. Three drop shape models and the related volume estimation methods are defined in the first part. The second part is devoted to an experimental analysis of drop shape variations. In a final experimental part, the presented methods are compared and the good performance of a volume measurement method is experimentally demonstrated, showing an rms error of 1% in normal measurement conditions. These figures demonstrate the value of measurement by vision and represent a good basis for predicting the suitability of the method in various applications.
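The simplest of the shape models, an axisymmetric drop, leads directly to a disc-integration estimate: the silhouette's half-width at each height is treated as the radius of a circular slice. This sketch illustrates that principle under the symmetry assumption; the paper's three shape models are not reproduced here.

```python
import numpy as np

def drop_volume(radii, dz=1.0):
    """Volume of an axisymmetric drop from its silhouette.

    radii[i] is the measured half-width of the drop's projection at
    height i. Assuming rotational symmetry about the vertical axis,
    each slice is a disc and the volume is the sum of pi*r^2*dz.
    """
    radii = np.asarray(radii, float)
    return np.pi * np.sum(radii ** 2) * dz
```

Checking the estimator against a sphere, where the silhouette is a circle and the true volume is (4/3)*pi*R^3, confirms the discretization error is negligible at fine slice spacing.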
Indirect adaptive sampling techniques are introduced specifically for 3D inspection of sculptured (free-form) surfaces normally found on objects produced by extrusion, die casting, and molding processes. The techniques successfully extend optimum 2D sampling methods to 3D applications. The modified 2D adaptive sampling techniques are used sequentially twice: first, the critical cross sections are optimally selected, then each section is itself optimally sampled to develop an accurate description using a small number of sampling points. Optimized view planning achieves minimum occlusion and minimum rotation to ensure complete inspection of an object, and also automatically maximizes the number of surfaces to which adaptive sampling can most fully be applied. The best view, based on the number of visible faces and the face area, has proven applicable to integrating finite element (FEM) centroid sampling and indirect adaptive sampling techniques, respectively, for the inspection of sculptured surface products. Experimental work has verified that the best view based on the number of visible faces not only maximizes the number of visible meshes for centroid sampling, but also reveals the maximum amount of high-curvature regions of an object.
The use of a laser range sensor in the 3D part digitization process for inspection tasks allows very significant improvement in acquisition speed and in 3D measurement point density, but does not equal the accuracy obtained with a coordinate measuring machine (CMM). Inspection consists in verifying the accuracy of a part with respect to a given set of tolerances, so the 3D measurements must be accurate. In the 3D capture of a part, several sources of error can alter the measured values. We therefore have to find and model the most influential parameters affecting the accuracy of the range sensor in the digitization process. This model is used to produce a sensing plan to completely and accurately acquire the geometry of a part. The sensing plan is composed of the set of viewpoints defining the exact position and orientation of the camera relative to the part. The 3D point cloud obtained from the sensing plan is registered with the CAD model of the part and then segmented according to the different surfaces. Segmentation results are used to check the tolerances of the part. Using the noise model, we introduce a dispersion value for each 3D point acquired according to the sensing plan. This dispersion value serves as a weighting factor in the inspection results.
The reconstruction of highly detailed 3D object models is a major goal of current research. Such models can be used in machine vision applications as well as for visualization purposes. The method presented here assumes that there are multiple range and intensity image pairs of an object, all registered to a global coordinate system. The individual range images are used to create a surface mesh, and the associated intensity images are applied to the surface mesh as a texture map. These multiple textured range meshes are then used to update a volume grid, based upon whether a location in the volume grid is known, unknown, or empty, using the information that has the highest confidence for any given voxel. The updated volume grid can then be passed through a marching cubes algorithm with adaptive subdivisions to get a fully textured 3D model. The adaptive marching cubes algorithm takes into account additional information concerning edge weights and texture coordinates to give a smoother surface than that produced with standard marching cubes. Once complete, additional registered intensity images can be applied to the surface of the object.
In 3D inspection applications, a round-view datacloud rather than single-view range data is needed to assess the dimensions of an industrial part. This paper discusses how to acquire the round-view datacloud of a part with a structured light machine vision (SLMV) scanner. The SLMV system consists of a line-structured laser, a scanning mechanism, an image grabber, and a computer. In this scanning system, the part to be inspected is held on a turntable to sequentially expose different sides of the part to the scanner. For each side of the part, range data are found by triangulation. Combining range data captured from different sides into a single composite produces a more complete datacloud description of the part's surfaces. Many more dimensions of a typical part can be inspected by analyzing the composite datacloud. The scanning process is divided into two phases: part rotation and surface scanning. Once the turntable is rotated to a specified position, the laser scans the part surfaces visible in that position. Data points captured from different positions are merged into one composite datacloud according to a previously determined rotational center.
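The merging step can be sketched as rotating each view's points about the known rotational center by that view's turntable angle, bringing all views into one common frame. This is a simplified 2-D (plan-view) illustration under assumed conventions; the real system rotates 3-D points about the turntable axis, and all names here are illustrative.

```python
import numpy as np

def merge_views(views, center):
    """Merge point sets captured at different turntable positions.

    views: list of (angle_deg, points) pairs, points an (N, 2) array
    in the scanner frame. Each set is rotated about the rotational
    center by its turntable angle so all points land in one frame.
    """
    c = np.asarray(center, float)
    merged = []
    for angle_deg, pts in views:
        t = np.radians(angle_deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        merged.append((np.asarray(pts, float) - c) @ rot.T + c)
    return np.vstack(merged)
```

If the rotational center is found accurately beforehand, a feature seen from several turntable positions maps to a single coherent location in the composite datacloud; errors in the center show up directly as misalignment between views.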
A new method is proposed to extract features from an object for matching and recognition. The features proposed are a combination of local and global characteristics: local characteristics from the 1-D signature function defined at each pixel on the object boundary, and global characteristics from the moments generated from the signature function. The boundary of the object is first extracted; the signature function is then generated by computing the angle between two lines from every point on the boundary as a function of position along the boundary. This signature function is position, scale, and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments. The moments of the signature function are the global characteristics of a local feature set. Using moments as the eventual features instead of the signature function itself reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that generate more accurate matching. The multiscale technique is basically a coarse-to-fine procedure and makes the proposed method more robust to noise. The method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation, and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
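One common form of such an angle signature, sketched below as an assumption since the paper's exact definition is not given in the abstract, takes at every boundary point the angle between the lines to the points k steps behind and ahead of it, then summarizes the resulting sequence by its central moments.

```python
import numpy as np

def signature_moments(boundary, k=5, n_moments=4):
    """Angle signature of a closed boundary and its central moments.

    For every boundary point p_i, the signature is the angle between
    the vectors to the points k steps behind and ahead of it; the
    central moments of that sequence serve as compact global features.
    """
    b = np.asarray(boundary, float)
    n = len(b)
    back = b[(np.arange(n) - k) % n] - b       # vector to p_{i-k}
    ahead = b[(np.arange(n) + k) % n] - b      # vector to p_{i+k}
    cos_a = np.sum(back * ahead, axis=1) / (
        np.linalg.norm(back, axis=1) * np.linalg.norm(ahead, axis=1))
    ang = np.arccos(np.clip(cos_a, -1.0, 1.0))
    mean = ang.mean()
    moments = [np.mean((ang - mean) ** p) for p in range(2, 2 + n_moments)]
    return ang, moments
```

On a circle the signature is constant by symmetry, so its variance (the first central moment computed here) vanishes, a quick sanity check that the signature responds to shape rather than position, scale, or rotation.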
This paper shows an approach to automatically configuring a system for texture analysis. We examine how each of the four modules (preprocessing, feature extraction, training, and classification) can be improved. The optimization methods involved are deterministic selection tools and genetic algorithms. Four different sample sets are used to test the proposed methods. It turns out that the greatest decrease in error rate can be reached by optimizing the feature extraction module; the error rate of the classification system can thereby be decreased by approximately 40%.
Detection probability is an extremely important performance metric for Automated Inspection (AI) systems. Using the detection of false connection points in patterned images as an example, this paper presents a novel method to estimate the detection capability of an AI system. The concern is what width of false connection can be reliably detected by a given AI system. One possible approach for evaluating detection probability is to compare automatic detection results with the results from manual human inspection. Unfortunately, this method is tedious, time consuming, and inspector-dependent; moreover, an inspector's tiredness or oversight easily results in missed detections. In this paper, the Modulation Transfer Function (MTF) is used to determine the functional resolution of the system and generate theoretical profiles around false connection defects. The algorithm used for detecting the defects contains an auto-thresholding method for binarization. The statistical properties of these thresholds can be derived from the on-line record of thresholds of the system and essentially determine the detection results. Based on the statistical properties of the thresholds and their bounds, as well as the shapes of the theoretical profiles, the detection probability of the AI system is evaluated.
The detection and classification of faults is a major task for optical nondestructive testing in industrial quality control. Interferometric fringes, obtained by real-time optical measurement methods, contain a large amount of image data with information about possible defect features. This mass of data must be reduced for further evaluation. One possible way is the filtering of these images by applying the adaptive wavelet transform, which has proved to be a capable tool for detecting structures with a definite spatial resolution. In this paper we show the extraction and classification of disturbances in interferometric fringe patterns, the application of several wavelet functions with different parameters for the detection of faults, and the combination of wavelet filters for fault classification. Examples of fringe patterns with known and varying fault parameters are processed, showing the trend of the extracted features, in order to draw conclusions concerning the relation between the feature, the filter parameter, and the fault attributes. Real-time processing was achieved by importing video sequences into a hybrid opto-electronic system with digital image processing and an optical correlation module. The optical correlator system is based on liquid-crystal spatial light modulators, which are addressed with image and filter data. Results of digital simulation and optical realization are compared.
Quality control by artificial vision is becoming more and more widespread in industry. Indeed, many industrial applications require control with high stability at high production rates. For texture control, some major problems may occur: difficulty in distinguishing different textures, and segmentation, classification, and decision phases that still require too much computation time. This article presents a comparison between two non-parametric classification methods used for real-time control of textured objects moving at a rate of 10 pieces per second. Four types of flaws have to be detected indifferently: smooth surfaces, bumps, hollow knocked surfaces, and lacks of material. These defects generate texture variations which have to be detected and sorted by our system, each flaw occurrence being registered to carry out a survey over the production cycle. We previously presented a search for an optimal lighting system, with which the acquired images were tremendously improved. On these optimal images, we described a method for selecting the best segmentation features. The third step, which is presented here, is a comparison between two multi-class classification algorithms: the Parzen estimator and the so-called 'stressed polytopes' method. These two algorithms, which require a learning phase, are both based on a non-parametric discrimination of the flaw classes. On the one hand, they are both relatively inexpensive in computation time; on the other hand, they present different assets regarding the ease of the learning phase and the number of usable segmentation features. They also behave differently in how they partition the feature space, especially at the inter-class borders. Their comparison is made on the aforementioned points, which are relevant for evaluating discrimination efficiency.
Finally, we present the results of such a comparison through an industrial example. The control system, a PC-based machine, includes the calculation of five classification features (carried out on the local neighborhood of each pixel), five distinct classes for the classification phase, and the decision phase. This led to a 3.63% classification error rate for the best compromise.
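The Parzen estimator side of the comparison can be sketched as a kernel density classifier: each class density is estimated with a Gaussian window around that class's training samples, and a test point takes the label of the highest density. This is a minimal sketch of the general estimator, with an assumed Gaussian window and illustrative names; the paper's feature set and window choice are not given.

```python
import numpy as np

def parzen_classify(X_train, y_train, X_test, h=1.0):
    """Multi-class Parzen (kernel density) classifier.

    Estimates each class density with a Gaussian window of width h and
    assigns every test point to the class of maximum estimated density.
    """
    X_train = np.asarray(X_train, float)
    X_test = np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        pts = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2.0 * h * h)).mean(axis=1))
    return classes[np.argmax(np.vstack(scores), axis=0)]
```

The non-parametric character is visible here: no class shape is assumed, only a smoothing width, which is what makes the learning phase simple at the cost of keeping all training samples.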
In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train automatic defect classification systems, and examine historical data for trends. Image management in semiconductor yield management systems is a growing cause of concern since many facilities collect 3000 to 5000 images each month, with future estimates of 12,000 to 20,000. Engineers at Oak Ridge National Laboratory (ORNL) have developed a semiconductor-specific content-based image retrieval architecture, also known as Automated Image Retrieval (AIR). We review the AIR system approach including the application environment as well as details on image interpretation for content-based image retrieval. We discuss the software architecture that has been designed for flexibility and applicability to a variety of implementation schemes in the fabrication environment. We next describe details of the system implementation including image processing and preparation, database indexing, and image retrieval. The image processing and preparation discussion includes a description of an image processing algorithm which enables a more accurate description of the semiconductor substrate (non-defect area). We also describe the features used that identify the key areas of the defect imagery. The feature indexing mechanisms are described next, including their implementation in a commercial database. Next, the retrieval process is described, including query image processing. Feedback mechanisms, which direct the retrieval mechanism to favor specified retrieval results, are also discussed. Finally, experimental results are shown with a database of over 10,000 images obtained from various semiconductor manufacturing facilities. These results include subjective measures of system performance and timing details for our implementation.
A True Color Tube Bore Inspection System (TCTBIS) has been developed to aid in the visual nondestructive examination of the inside surfaces of small-bore stainless steel tubes. The instrument was developed to inspect for the presence of contaminants and oxidation on the inner surfaces of these 1.5 to 1.7 millimeter inside-diameter tubes. Previously, a parameter called the color factor, which can be calculated from the images collected by the TCTBIS, was found to be a good measure of the surface quality in these tubes. The color factor is a global number in the sense that it is calculated for the entire inspection region. Additional algorithms have also been developed to evaluate the tube based on surface inhomogeneities that are indicative of the presence of foreign matter, local chemical attack, or other undesirable but localized conditions. These algorithms have been incorporated into an up-to-date apparatus, which is described in detail. We have also investigated the feasibility of using artificial intelligence techniques to aid in the interpretation of these defects. Promising results were obtained with a feed-forward, back-propagation artificial neural network.
It is known that the transformation of RGB color space to the normalized color space is invariant to changes in scene geometry. The transformation to the hue color space is additionally invariant to highlights. However, due to sensor noise, the transforms become unstable at many RGB values. This effect is usually overcome by ad hoc thresholding; for example, if the RGB coordinates are located near the achromatic axis, the corresponding hue value is rejected. To arrive at a principled way to deal with the instabilities that result from these color space transforms, the contribution of this report is as follows. Uncertainties in the measured RGB values are caused by photon noise, which arises from the statistical nature of photon production. Using a theoretical camera model, we determine the number of photons required to cause a color value transition. Based on the associated uncertainty according to the Poisson distribution, we then derive theoretical models that propagate this uncertainty to the uncertainty in the transformed color coordinates. We propose a histogram construction method based on Parzen estimators that incorporates this theoretical reliability. As a result, we overcome the need for thresholding of the transformed color values.
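The propagation principle can be illustrated on the normalized red coordinate r = R/(R+G+B). Since photon counts are Poisson distributed, Var(X) = X for each channel, and first-order error propagation through the transform gives an uncertainty for r. This sketch illustrates the general principle only, not the report's camera model; names are illustrative.

```python
import numpy as np

def normalized_r_with_uncertainty(R, G, B):
    """Normalized red coordinate r = R/(R+G+B) and its photon-noise std.

    First-order (delta method) propagation: Var(r) is the sum over
    channels of (dr/dX)^2 * Var(X), with Var(X) = X for Poisson counts.
    """
    S = R + G + B
    r = R / S
    dR = (G + B) / S ** 2        # partial derivative of r w.r.t. R
    dG = -R / S ** 2             # w.r.t. G
    dB = -R / S ** 2             # w.r.t. B
    var = dR ** 2 * R + dG ** 2 * G + dB ** 2 * B
    return r, np.sqrt(var)
```

The uncertainty shrinks as the total count grows: bright pixels give reliable chromaticity while dark ones near the achromatic axis do not, which is exactly why weighting histogram entries by this reliability can replace hard thresholding.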
Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damage, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier-based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similarly colored defects. Experiments with red- and yellow-skinned potatoes have shown that the system is robust and consistent in its classification.
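The pixel-classification step can be sketched as a Mahalanobis-distance classifier over per-class color statistics (a minimal stand-in; the paper additionally applies an LDA projection first):

```python
import numpy as np

class MahalanobisPixelClassifier:
    """Assign each color pixel to the class whose distribution is
    nearest in Mahalanobis distance. Minimal illustrative sketch."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.inv_covs_ = [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.means_.append(Xc.mean(axis=0))
            self.inv_covs_.append(np.linalg.inv(np.cov(Xc, rowvar=False)))
        return self

    def predict(self, X):
        # Squared Mahalanobis distance to each class, then argmin.
        d2 = [np.einsum('ij,jk,ik->i', X - m, P, X - m)
              for m, P in zip(self.means_, self.inv_covs_)]
        return self.classes_[np.argmin(d2, axis=0)]
```

In practice the class statistics would be estimated from labeled samples of each defect and skin color.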
Machine Vision Systems Integration and Process Characterization
A PC-based machine vision system, designed for the precise positioning and reliable recognition of gas oil filters, is described. The system has been integrated into a production line capable of assembling several types of filters, each one having its own visual appearance. Our primary goal was to design a flexible system that could easily be adapted to assembling different filter types. To achieve this, an appearance-based method employing the Karhunen-Loeve expansion was used. With this method, the most significant visual information is automatically extracted from a set of rotated filter images, i.e. templates, and described by a small number of eigenimages; the eigenimages constitute the eigenspace. These templates and the captured image of the filter in an unknown position are projected into the eigenspace. The distances between the projected templates and the projected filter image are computed; based on these distances, the filter position and type are determined and the filter is rotated accordingly. The system operates in a closed loop, so the new position can be evaluated and corrected as required. The results obtained so far show that the system works reliably and meets the required accuracy and speed.
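The eigenspace construction and matching steps can be sketched with an SVD-based Karhunen-Loeve expansion (image sizes, template count and the subspace dimension k here are arbitrary placeholders):

```python
import numpy as np

def build_eigenspace(templates, k):
    """Karhunen-Loeve expansion over a stack of flattened templates:
    returns the mean image, the top-k eigenimages, and the
    projections of each template into the eigenspace."""
    X = np.stack([t.ravel().astype(float) for t in templates])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                       # k eigenimages (rows)
    coords = (X - mean) @ basis.T        # projected templates
    return mean, basis, coords

def match(image, mean, basis, coords):
    """Project a captured image and return the index of the nearest
    projected template (i.e. the estimated pose/type)."""
    p = (image.ravel().astype(float) - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - p, axis=1)))
```

The distance comparison happens in the low-dimensional eigenspace, which is what makes the per-frame matching cheap enough for closed-loop operation.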
This paper is dedicated to machine vision systems used in the metallurgy industry for process control. Various systems aimed at performing on-line process control are presented. The study focuses mainly on the control of surface treatment processes; the control of processes such as HF welding and laser cladding is shown. Since surface temperature is often a crucial point in process control, monochromatic and dual-wavelength non-contact temperature measurement methods are introduced.
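Dual-wavelength (ratio) pyrometry removes the unknown emissivity by assuming it is equal at both wavelengths (gray body); under the Wien approximation the temperature then follows directly from the intensity ratio. A sketch of that relation:

```python
import math

def two_color_temperature(i1, i2, lam1, lam2):
    """Gray-body ratio pyrometry under the Wien approximation:
    I(lam, T) ~ eps * lam**-5 * exp(-C2 / (lam * T)), and the
    emissivity eps cancels in the two-wavelength ratio.
    Wavelengths in metres, intensities in arbitrary (common) units;
    returns temperature in kelvin. Illustrative sketch."""
    C2 = 1.4388e-2  # second radiation constant, m*K
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (
        math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1))
```

The monochromatic method, by contrast, needs the emissivity explicitly, which is why the ratio method is attractive on changing metal surfaces.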
An in-line, non-destructive process is being developed for characterizing polycrystalline thin-film and other large-area electronic devices using computer-vision-based imaging of the manufacturing and inspection steps during the device fabrication process. This process is being applied specifically to Cadmium Telluride/Cadmium Sulfide (CdTe/CdS) thin-film, polycrystalline solar cells. Our process involves the acquisition of reflective, transmission and electroluminescence (EL) intensity images for each device. The EL intensity images have been processed using a modified median-cut segmentation. The processed images reveal different gray-level regions corresponding to different intensities of EL originating from radiative recombination events occurring within a biased solar cell. Higher-efficiency devices show a more uniform intensity distribution than lower-efficiency devices. The uniform intensity regions are made up of gray-level intensity values found near the mean of the histogram distribution; these are identified as regions of good device performance and are attributed to better material quality and processing. Low-intensity regions indicate either material defects or errors in processing. This novel characterization process and analysis are providing new insights into the causes of poor performance in CdTe-based solar cells.
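A plain (unmodified) median-cut on the gray-level distribution can be sketched as recursive splitting at the median, giving 2**depth intensity bands; the paper's modified variant differs in detail:

```python
import numpy as np

def median_cut_segment(gray, depth):
    """Label each pixel by its intensity band after recursively
    splitting the sorted intensities at the median, depth times.
    Plain median-cut sketch of the segmentation step."""
    values = np.sort(gray.ravel().astype(float))
    bins = [values]
    for _ in range(depth):
        # Split every bin at its median element.
        bins = [half for b in bins for half in np.array_split(b, 2)]
    edges = [b[0] for b in bins[1:]]      # band boundaries
    return np.digitize(gray, edges)
```

On an EL image this groups pixels into equal-population gray-level regions, so bands near the histogram mean map to the "good device" regions described above.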
In the automotive industry, a vehicle begins with the construction of the vehicle floor. Later on, several robots weld a series of bolts to this floor, which are used to fix other parts. Due to several problems, such as welding tool wear, robot miscalibration or a momentarily low power supply, among others, some bolts are incorrectly positioned or not present at all, causing problems and delays in subsequent work cells. It is therefore important to verify the quality of the welded parts before the following assembly steps. A computer vision system is proposed to autonomously verify the presence and quality of the bolts. The system should carry out the inspection in real time at the car assembly line under the following conditions: without touching the bodywork, with a precision in the submillimeter range, and within a few seconds. In this paper we present a basic computer vision system for bolt location in the submillimeter range. We analyze three arrangements of the system components (camera and illumination sources) that produce different localization results. Results obtained under laboratory conditions are presented and compared for the three approaches. The algorithms were also tested on the assembly line, where variations of up to one millimeter in the welded position of the bolts were observed.
Nowadays, vision-based inspection systems are present at many stages of the industrial manufacturing process. Their versatility, which allows them to accommodate a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps to speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of search heuristics such as genetic algorithms and simulated annealing. Experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.
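The search step can be sketched with a generic simulated-annealing maximizer over a parameter vector (the cooling schedule, step size and objective here are illustrative placeholders, not the paper's):

```python
import math
import random

def simulated_annealing(objective, x0, step, n_iter=5000, t0=1.0, seed=0):
    """Maximize `objective` by random perturbation with a cooling
    temperature: worse moves are accepted with probability
    exp(delta / t), which shrinks as t decreases. Generic sketch."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for i in range(n_iter):
        t = t0 * (1.0 - i / n_iter) + 1e-9        # linear cooling
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        if fc >= fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
    return best, fbest
```

For the configuration assistant, `objective` would be one of the proposed discriminating-power metrics evaluated over the inspection parameters.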
The American textile industry has lost an estimated 400,000 jobs to offshore competitors since 1980, and it is predicted that it will lose an additional 600,000 jobs by the year 2002. These losses, and the resulting economic threat to the U.S. textile industry, can be attributed to the low operating costs of the offshore competition. To stem these rising losses, the American textile industry entered into an agreement with the U.S. Department of Energy (DOE) in a program called the American Textile Partnership (AMTEX™). Since the minimum U.S. labor rate is well above that of its offshore competitors, one of the competitive advantages the U.S. industry hopes to gain is higher-quality fabric. To facilitate this, a Computer-Aided Fabric Evaluation (CAFE) system has been developed at Oak Ridge National Laboratory (ORNL) and Lockheed Martin Energy Systems, Inc. (LMES). The system is based on a class 3a laser and a set of cylindrical lenses allowing 1-D imaging of single yarns thrown in the fill direction. It is designed to be located close to the point of fabric formation, providing data and information on the structure, patterns, and material defects of the fabric as it is being formed.
The automatic classification of defective eggs constitutes a fundamental issue in the poultry industry, for both economic and sanitary reasons. The early separation of eggs with spots and cracks is a relevant task, as stained and cracked eggs can leak while progressing along the conveyor belts, degrading the mechanical parts. The present work focuses on the implementation of an artificial vision system for detecting defective eggs in real time at the poultry farm. The first step of the algorithmic process is devoted to detecting the egg shape to fix the region of interest. Color processing is then performed only on the eggshell to obtain an image segmentation that allows the discrimination of defective eggs from clean ones within the critical time. Results are presented that demonstrate the validity of the proposed visual process on a wide sample of both defective and non-defective eggs.
A novel concept of object-oriented, vision-based recognition of objects of various shapes is introduced. It can be used by a vision-guided manipulator grasping objects without quantitative modeling of the robot and the optical system. The detection of the object against the background and other irrelevant image information is achieved by directly observing the object's appearance in real-time images. With this approach, coordinate transformations and object reconstructions are avoided; instead, image data are used directly to control the behavior of the robot and its interactions with physical objects. The approach was evaluated and demonstrated in real-world experiments on a vision-guided, calibration-free manipulator with five degrees of freedom (DOF), recognizing and grasping a variety of differently shaped objects in nearly arbitrary orientations and positions anywhere in the robot's 3-D workspace.
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks in which low-level processing and feature extraction can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates, connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
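The demonstrated algorithm can be sketched in software as a Laplacian-of-Gaussian convolution followed by zero-crossing detection (the FPGA implements the same pipeline in hardware; parameters here are illustrative):

```python
import numpy as np
from scipy import ndimage

def log_edges(img, sigma, eps=1e-9):
    """Multi-scale edge map: smooth at scale sigma, take the
    Laplacian, then mark sign changes between 4-neighbours."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma)
    log[np.abs(log) < eps] = 0.0            # suppress numerical noise
    s = np.sign(log)
    zc = np.zeros(s.shape, dtype=bool)
    zc[:-1, :] |= (s[:-1, :] * s[1:, :]) < 0   # vertical sign changes
    zc[:, :-1] |= (s[:, :-1] * s[:, 1:]) < 0   # horizontal sign changes
    return zc
```

Running the same detector at several values of sigma gives the multi-scale behavior mentioned in the abstract.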
In the paper, an on-line detection system for ships in a lock is presented. The system has three functions: (1) judging in time whether there are ships in the lock; (2) detecting whether ships cross the forbidden lines in the lock; (3) observing whether there are ships out of the lock. First, a Gaussian probability distribution model (probability field) is generated from the gray-level histogram to diminish the effects of building shadows and of white speckles caused by bright light. Second, a fundamental optical flow method based on spatio-temporal intensity derivatives is used to calculate the normal velocity field of the image sequences. Third, a probability velocity field is defined and generated by combining the velocity field with the probability field. Two methods for calculating the probability velocity field are presented: one multiplies the probability value by the magnitude of the velocity field, and the other uses the probability field to compute the velocity field. Finally, a movement-block detection method is designed according to the probability velocity field; the method not only detects the size and position of movement blocks but also obtains their direction of movement. These methods have been tested and installed successfully in the temporary ship lock of the Three Gorges Project (TGP).
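The second step rests on the brightness-constancy constraint, from which only the flow component along the intensity gradient (the normal velocity) is recoverable: v_n = -I_t / |∇I|. A minimal sketch with finite-difference derivatives:

```python
import numpy as np

def normal_flow(f0, f1):
    """Normal component of optical flow between two frames from
    spatio-temporal intensity derivatives: v_n = -I_t / |grad I|.
    Derivatives are simple finite differences (illustrative sketch)."""
    f0, f1 = f0.astype(float), f1.astype(float)
    Iy, Ix = np.gradient(f0)              # spatial derivatives
    It = f1 - f0                          # temporal derivative
    mag = np.hypot(Ix, Iy)                # gradient magnitude
    # Guard against division by zero in flat regions.
    vn = np.where(mag > 1e-6, -It / np.maximum(mag, 1e-6), 0.0)
    return vn, mag
```

Weighting this field by the per-pixel probability field then yields the probability velocity field described in step three.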
Many industrial inspection tasks, such as the visual quality control of metal laminates, require billions of operations per second. The analog CNN array computer arises as an alternative to traditional digital processors, capable of performing the equivalent of tera operations per second on a single chip. A 4096-processor analog CNN array is able to perform complex space-time image analysis, being much faster than a camera-computer system in continuous inspection applications. Both chips have been implemented in CMOS technology, and they are managed by a 32-bit high-performance, low-cost micro-controller that closes the pan, tilt, lighting, focus and zoom loops required to implement the active vision strategies. Several convolution masks for the cellular processors have been selected to detect particular changes in the texture, size, direction or orientation of image entities, reprogramming the pixel resolution or shape 'on the fly' when necessary. Laboratory results present these cellular processors and multiple-resolution imager circuits as a promising architecture for real-time visual inspection of industrial processes.
This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.
Erosion and dilation are two basic morphological filters and have been widely used in both academic and industrial fields. When they are used in industry, such as in automated visual inspection, their implementation cost, especially for large masks, is a challenging issue. In this paper, we propose an FDT (form distance transform) method for implementing erosion and dilation for some regular shapes. In the proposed method, the implementation of erosion and dilation is first converted into the computation of the FDT; a propagation technique is then used to compute the FDT. The computational cost of the new method is independent of mask size. In contrast to the direct implementation, if the number of pixels in a morphological mask is N, the proposed method reduces the implementation cost from O(N) to O(1).
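The underlying principle can be illustrated with a Euclidean distance transform: erosion of a binary image by a disk of radius r keeps exactly the pixels whose distance to the nearest background pixel exceeds r, so the per-pixel cost no longer depends on the mask size. (A sketch of the principle; the paper's FDT and propagation scheme differ in detail.)

```python
import numpy as np
from scipy import ndimage

def erosion_via_dt(img, r):
    """Erode a binary image by a disk of radius r via one distance
    transform: a pixel survives iff no background pixel lies within
    distance r of it."""
    dt = ndimage.distance_transform_edt(img)
    return dt > r

def erosion_direct(img, r):
    """Reference: direct erosion with an explicit disk mask, whose
    cost grows with the number of pixels in the mask."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    disk = x * x + y * y <= r * r
    return ndimage.binary_erosion(img, structure=disk)
```

Dilation by the same disk follows by duality: erode the complement of the image.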
Texture analysis is an important generic research area of machine vision for the detection of patterns in two-dimensional data representations. Despite the wide array of potential areas of application for texture analysis, only a limited number of successful exploitations of texture exist so far, since most reported techniques lack the computational tractability required by industry. Neural-network-based classifiers have also been proposed for texture recognition. Recent studies of the visual cortex of the cat highlight the role of temporal processing using synchronous oscillations for object identification. In this paper, the original Eckhorn neural model is modified according to Johnson for texture classification and analysis. A two-dimensional texture image can be mapped into a one-dimensional output function: the time signature. Each time signature, in the form of an 8-bit gray-level image, is further presented to a second PCNN to produce a binary barcode. There is a one-to-one correspondence between these barcoded PCNN outputs and the corresponding input images. The effectiveness of this novel method is demonstrated using 50 textures taken from the Brodatz texture album. An n-tuple (RAM-based) neural network is finally used for recognition. Our test results demonstrate that the approach is fast and robust, making it suitable for real-time applications.
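A minimal PCNN iteration reduced to its 1-D time signature (the number of neurons firing at each step) can be sketched as follows; the decay constants, linking strength and kernel are illustrative values, not those of the paper:

```python
import numpy as np
from scipy import ndimage

def pcnn_signature(img, n_iter=40, beta=0.2, vt=20.0,
                   af=0.1, al=1.0, at=0.3):
    """Pulse-coupled neural network time signature: feeding (F),
    linking (L) and threshold (theta) fields evolve with leaky decay,
    and the per-iteration firing count forms the 1-D signature."""
    s = img.astype(float) / max(img.max(), 1)   # normalized stimulus
    k = np.ones((3, 3)); k[1, 1] = 0            # linking kernel
    F = np.zeros_like(s); L = np.zeros_like(s)
    theta = np.ones_like(s); Y = np.zeros_like(s)
    sig = []
    for _ in range(n_iter):
        link = ndimage.convolve(Y, k, mode='constant')
        F = np.exp(-af) * F + s + link          # feeding field
        L = np.exp(-al) * L + link              # linking field
        U = F * (1.0 + beta * L)                # internal activity
        Y = (U > theta).astype(float)           # firing neurons
        theta = np.exp(-at) * theta + vt * Y    # refractory threshold
        sig.append(int(Y.sum()))
    return np.array(sig)
```

Two images with the same texture statistics produce similar signatures, which is what makes the signature usable as a translation-tolerant feature for the downstream classifier.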
Traditional inspections of road surfaces for condition assessment and for locating cracks are time-consuming, expensive and can prove to be dangerous. What is ideally required is a fully equipped automated inspection vehicle capable of high-precision location and characterization of road surface cracks over the width of the road in a single pass. We propose an automatic crack monitoring system (akin to HARRIS - UK) with the video-based subsystem replaced by Global Positioning System receivers for more accurate positioning. Moreover, our technique avoids the storage of large volumes of scanned images of 'acceptable' road surface conditions. A pulse coupled neural network (PCNN) is used as a preprocessor for each scanned image to detect cracks, while another PCNN segments this image to characterize the identified defects. The latter image is then stored as a binary image along with the GPS data. The type of crack is later identified (offline) from the recorded binary images. This mode of data collection leads to a more accurate, less costly and faster automated system. Our results for road surface (concrete and bituminous) images reveal the suitability of this novel technique for a fully automated road inspection system for crack identification and characterization.
Surface Characterization for Computer Disks, Wafers, and Flat Panel Displays
Reduction of the thickness of the diamond-like carbon (DLC) overcoat deposited on media and heads plays an important role in enhancing areal density. This is because the DLC layer contributes directly to the spacing between media and head, and increases in areal density are achieved by reducing this spacing. In fact, DLC thicknesses of the order of 50 Angstrom are now required. With such ultra-thin DLC overcoats, quick and accurate thickness measurements are becoming a must. In this article, an optical technique for measuring DLC thickness rapidly and nondestructively is presented. The technique, termed the 'n&k Method,' is based on broad-band reflectance spectrophotometry with the Forouhi-Bloomer dispersion equations used in the data analysis. Results for samples with DLC thicknesses ranging from approximately 25 Angstrom to 300 Angstrom are given. In addition, a typical uniformity map of DLC on a magnetic disk is presented, whereby the thickness ranges from 46 Angstrom to 50 Angstrom, with a mean value of 50 Angstrom and a standard deviation of 2 Angstrom. The thicknesses determined using the 'n&k Method' are compared with those from step-height measurements using stylus and atomic force microscopy (AFM). The results are consistent within the measurement error, and the optical measurement offers far better precision and repeatability.
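For reference, the single-term Forouhi-Bloomer dispersion relations used in such fits are commonly stated as (E is photon energy, E_g the band gap, and A, B, C fit parameters):

```latex
k(E) = \frac{A\,(E - E_g)^2}{E^2 - BE + C}, \qquad
n(E) = n(\infty) + \frac{B_0 E + C_0}{E^2 - BE + C},
```

where B_0 and C_0 are not free parameters but follow from A, B, C and E_g through Kramers-Kronig consistency between n(E) and k(E).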
Veeco Metrology has designed, built and installed at the California Institute of Technology an interferometer for testing long-radius optics for LIGO (Laser Interferometer Gravitational-Wave Observatory). Its accuracy is better than λ/100 P-V for focus and astigmatism coefficients and λ/1000 RMS for the residual surface. Its repeatability is better than λ/4000 RMS, with retrace error below 6 nm P-V with 4 fringes of tilt. ROC (radius of curvature) measurement error is less than 3%. In this paper we outline the requirements for the interferometer and discuss the more challenging aspects of both the optical design and the alignment. Some measurement results are also presented.
A new instrument has recently been developed for characterizing various properties of thin film magnetic disks. Since defects on thin film disks can take many forms, it is useful to combine many different optical detection techniques into a single tool. This greatly improves the chances of detecting a wide variety of defects, both topographic and non-topographic. The ellipsometer part of the instrument is used to measure the lubricant, carbon layers and organic contamination on thin film disks. The reflectometer is used to detect scratches, pits, particles, and texture angle. The scatterometer is used to detect particles, corrosion, and surface roughness. The Kerr effect microscope detects magnetic patterns on thin film disks and can be used as a replacement for ferrofluid marking of magnetic defects. The instrument also incorporates a built-in precision diamond scribe for marking defects. The theory of operation and the design of this instrument will be discussed. Examples will be given of the different types of defects and thin films that can be detected on thin film disks.
We present a polarization discriminating interferometer, where the test and reference beams are encoded in orthogonal coherent polarization states. The optical signal output from such an interferometer has a normalized degree of polarization (P') that varies monotonically as the optical path difference (OPD) between the test and reference paths is increased from zero. We analyze the interferometer output using a novel Stokes polarimeter, employing two switchable ferroelectric liquid crystal (FLC) waveplates and a polarization image splitter to effect the polarization transformations required for a full Stokes analysis. The addressing time of approximately 100 microseconds for the FLC waveplates, coupled with the image splitter, allows data to be collected in three video frames. Manufacturing tolerances inherent in the FLC waveplates, together with alignment errors in the optical system, lead to errors in the measurement of P'. We examine these errors and show that they cause the relationship between surface height and measured P' to depart from the ideal monotonic form, thus reintroducing ambiguity into the measurement system. We present a numerical correction term that allows us to recover the correct P' value from the measured data, thereby returning us to an unambiguous surface profile. We show profiles taken from surfaces with step discontinuities of several wavelengths, demonstrating the system's ability to resolve these height differences.
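The quantity recovered by the full Stokes analysis is the degree of polarization, computed from the four Stokes parameters; a minimal sketch:

```python
import math

def degree_of_polarization(S):
    """Degree of polarization from a Stokes vector (S0, S1, S2, S3):
    P = sqrt(S1^2 + S2^2 + S3^2) / S0. This is the quantity whose
    monotonic dependence on OPD the interferometer exploits."""
    S0, S1, S2, S3 = S
    return math.sqrt(S1 * S1 + S2 * S2 + S3 * S3) / S0
```

Fully polarized light gives P = 1 and unpolarized light gives P = 0, so P' decreasing from 1 maps directly onto increasing OPD in the ideal, error-free case.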