In this paper we deal with the problem of matching and recognizing planar curves modeled as B-splines, independently of any transformation the original curve may have undergone. Curve matching is achieved by comparing a finite set of B-spline moments against a set of B-spline prototype moments. When the observed curve is an unknown affine transformation (with four parameters) of one of the prototype curves, the affine parameters are estimated by relating the weighted moments of the original curve to those of the affine-transformed curve. Weighted B-spline moments of up to second order are used toward that end, yielding two single-variable quadratic equations and two linear equations in the four parameters of the linear transformation, for which a closed-form analytic expression exists. In the general linear case, the weight used is the affine arc length, which weights the moment integral by a kernel chosen so that the affine parameters factor out of the weighted moment integral. Once the transformation parameters are obtained, we undo the transformation and use the set of third- and fourth-order B-spline moments for classification. The method is illustrated by classifying affine-transformed silhouettes of aircraft.
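As a minimal sketch of the moment machinery described above (not the authors' B-spline implementation; the curve is assumed to be densely sampled into 2-D points), translation-invariant central moments can be computed as follows:

```python
import numpy as np

def raw_moments(points, order=2):
    """Raw moments m_pq = sum of x^p * y^q over the sampled curve points."""
    x, y = points[:, 0], points[:, 1]
    return {(p, q): float(np.sum(x**p * y**q))
            for p in range(order + 1) for q in range(order + 1 - p)}

def central_moments(points, order=2):
    """Central moments mu_pq about the centroid; invariant to translation,
    which is the first step toward undoing a full affine transformation."""
    m = raw_moments(points, order=1)
    centroid = np.array([m[(1, 0)], m[(0, 1)]]) / m[(0, 0)]
    return raw_moments(points - centroid, order)
```

The higher-order (third and fourth) moments used for classification follow the same pattern with a larger `order`.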
Man-made and natural objects alike are described both by their geometric shapes and by non-geometric attributes such as color. The objective of the proposed research is to create a system that integrates geometric and non-geometric attribute information for fast 3-D model-based object recognition. Hashing is employed in a hypothesize-and-verify approach to the 3-D model-based object recognition problem. Viewpoint-independent attributes are used in the hypothesis generation stage to eliminate model objects from consideration during hypothesis formation. Utilizing more than one attribute in the proposed hashing scheme helps ensure a reduction in the actual execution time for object recognition across larger model-bases. A nice feature of the system is that new object attributes can be added with relative ease. Issues concerning the ranking of attributes by their distinctiveness with respect to the objects in the model-base are discussed.
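A sketch of the attribute-hashing idea behind hypothesis generation; the attribute tuples and quantizer below are illustrative, not the paper's actual attribute set:

```python
def build_attribute_hash(models, quantize):
    """Index model objects by their quantized viewpoint-independent
    attribute tuple (e.g. a color statistic plus a surface-type label)."""
    table = {}
    for name, attrs in models.items():
        table.setdefault(quantize(attrs), set()).add(name)
    return table

def hypothesize(table, observed, quantize):
    """Hypothesis generation: only models that hash to the same bucket as
    the observation remain under consideration."""
    return table.get(quantize(observed), set())
```

Adding a new attribute changes only the tuple and the quantizer, which is what makes extending such a system straightforward.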
A new approach to the quasi-specular object picking task is described. Purposive and qualitative vision is used to detect the object to be picked. Surface reflectance properties and appropriate lighting directions are used to estimate the surface orientation. Extrinsic camera calibration parameters are used to compute the part attitude in a world reference system. A pneumatic gripper with proximity sensors on its extremities is used to grasp the part effectively. The acquisition sequence is driven by prior knowledge about the object to be grasped as well as by the reflectance properties of its surface. The purposive picking algorithm does not perform any 3-D reconstruction of the scene, but identifies the part to be picked as the part that is not occluded by any other part. Owing to the purposive nature of the method, neither 3-D scene modeling nor 3-D part modeling is required. Our method has low sensitivity to image noise, and it is robust and applicable to a wide range of industrial plastic and metal objects whose bidirectional reflectance distribution function (BRDF) approximates that of an ideal specular reflector. Test results using a real setup with metallic hexagonal rods are described. Experiments with cylindrical, square, and generic polygonal-base rods are in progress and are giving encouraging results.
In this paper we consider two methods for automatically determining threshold values for edge maps. Rather than relying on statistics, the methods are based on the figural properties of the edges. First, we investigate applying an edge evaluation measure based on edge continuity and edge thinness to determine the threshold on edge strength. However, this technique is not valid when applied to edge detector outputs that are one pixel wide. In that case, we use a measure, based on work by Lowe, for assessing edges in terms of the length and average strength of complete linked edge lists.
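The Lowe-style measure can be sketched as follows, assuming each linked edge list is represented by its per-pixel gradient strengths:

```python
def significance(chain):
    """Lowe-style significance of a linked edge list: its length times the
    average edge strength (numerically, the summed strength along the chain)."""
    return len(chain) * (sum(chain) / len(chain))

def threshold_chains(chains, min_score):
    """Keep only the edge lists whose significance clears the threshold."""
    return [c for c in chains if significance(c) >= min_score]
```

Long, weak chains and short, strong ones can thus both survive, which is the point of scoring whole linked lists rather than individual pixels.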
General aspects of feature extraction and matching are addressed, including optimality principles, similarity measures, constraints, and heuristics. The common characteristics of feature extraction and matching are summarized, showing that both can be considered special cases of signal detection. However, existing signal detection theories do not solve these problems readily. Therefore, a general formulation of feature extraction and matching as a problem of signal detection is desirable, and one is presented here. This formulation treats feature extraction and matching as similar, subsequent processes, integrating the two into an automatic system for image matching or object recognition. Guidelines for designing algorithms for the detection or matching of arbitrary image features or patterns are derived; such algorithms can be easily reconfigured for many practical applications. Typical methods, and the associated experiments with real image data, are provided to demonstrate the superb performance of the methods.
Lowe demonstrated a method for automatically segmenting and smoothing image curves by varying degrees, intended to remove noise and unnecessary fine detail and thereby aid subsequent processing such as grouping and matching. An alternative technique is described in this paper, based on recursively subdividing the curve into alternative sets of sections. Rather than using thresholds on the values of curvature and its derivatives to determine the segmentation and degree of smoothing, our technique is driven by three qualitative measures: (1) a criterion for selecting potential breakpoints, (2) a criterion for determining the amount of smoothing for curve sections, and (3) a significance measure that determines which sections form the best selection. The advantages of the technique are robustness, scale invariance, and the absence of parameters.
Real-time surface inspection of steel sheets is considered in this paper. Automatic defect detection and identification become necessary owing to the high production speed. After a description of the industrial application, the defect detection system, based on line-scan cameras, is described. At present, it raises an alarm when a defect is present. In a first step, texture classification based on co-occurrence matrices and neighboring gray-level dependence matrices is used to discriminate different aspects of the steel sheets. Specific image segmentation and feature extraction for the most common textures are presented. The discriminating power of the features is then verified by descriptive techniques, and classification methods based on neural networks and linear discrimination are compared.
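The first-step texture features can be sketched with a generic gray-level co-occurrence computation; the displacement and the derived features below are illustrative, not the plant's exact configuration:

```python
import numpy as np

def cooccurrence(img, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy)."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[img[y, x], img[y2, x2]] += 1
    s = glcm.sum()
    return glcm / s if s else glcm

def haralick_features(glcm):
    """A few classic texture features derived from a normalized GLCM."""
    i, j = np.indices(glcm.shape)
    return {
        "energy": float((glcm ** 2).sum()),
        "contrast": float((glcm * (i - j) ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
    }
```

Feature vectors of this kind are what the neural-network and linear-discriminant classifiers would then compare.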
This paper describes an image segmentation algorithm and the results obtained using a specially designed robotic head. The head consists of a camera and a laser range-finder mounted on a pan & tilt unit. Additional distance measuring capabilities, offered by the head, have been integrated into the segmentation process. The described method will be used for detecting visual landmarks by an autonomous mobile robot.
This article describes how an active eye can be controlled using a subsumption architecture in order to achieve robustness and high reactivity. The control is implemented according to the principles of subsumption architectures, through incremental design and testing of behaviors. A simulator of an active camera, whose objective is to search for long edges in a high-resolution image, was used to test the system. The structure of each component sub-system is detailed, and its effect on the behavior of the whole system is shown. The task of designing image sensors for active cameras is also discussed, considering the four different sensors implemented. Good results for different types of images are presented, supporting the use of behavior-based models in the attention systems of active cameras.
The design of a robot head, Neuto, for active computer vision tasks is described. The head/eye platform uses a common elevation configuration and has four degrees-of-freedom. All joints are driven by dc servo motors coupled with incremental optical encoders and minimum backlash gear-boxes. Details of the mechanical design, head controller design, architecture of the active vision system, and the performance of the head are presented.
This paper describes a fast and robust artificial neural network algorithm for solving the stereo correspondence problem in binocular vision. In this algorithm, the stereo correspondence problem is modelled as a cost minimization problem, where the cost is the value of the matching function between the edge pixels along the same epipolar line. A multiple-constraint energy minimization neural network is implemented for this matching process. This algorithm differs from previous work in that it integrates ordering and geometry constraints, in addition to the uniqueness, continuity, and epipolar constraints, into a neural network implementation. The processing procedure is similar to that of human vision. The edge pixels are divided into clusters according to their orientation and contrast polarity, and matching is performed only between edge pixels in the same cluster and on the same epipolar line. By following the epipolar line, the ordering constraint (the left-right relation between pixels) can be specified easily without building extra relational graphs as in earlier work. The algorithm thus assigns artificial neurons, following the same order as the pixels along an epipolar line, to represent the candidate matching pairs. The algorithm is discussed in detail and experimental results using real images are presented.
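As a simpler stand-in for the neural formulation, the ordering and uniqueness constraints along one epipolar line can be enforced exactly by dynamic programming; `left` and `right` below hold one scalar descriptor (e.g. contrast) per edge pixel in a cluster:

```python
def match_epipolar(left, right, occlusion_cost=1.0):
    """Order-preserving matching of edge features along one epipolar line.
    Uniqueness and ordering hold by construction; unmatched pixels pay an
    occlusion penalty. Returns the matched (left, right) index pairs."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:  # match pixel i-1 with pixel j-1
                cost[i][j] = min(cost[i][j],
                                 cost[i - 1][j - 1] + abs(left[i - 1] - right[j - 1]))
            if i:        # leave left pixel i-1 occluded
                cost[i][j] = min(cost[i][j], cost[i - 1][j] + occlusion_cost)
            if j:        # leave right pixel j-1 occluded
                cost[i][j] = min(cost[i][j], cost[i][j - 1] + occlusion_cost)
    # backtrack to recover the matches
    i, j, pairs = n, m, []
    while i or j:
        if i and j and cost[i][j] == cost[i - 1][j - 1] + abs(left[i - 1] - right[j - 1]):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif i and cost[i][j] == cost[i - 1][j] + occlusion_cost:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```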
Many tasks, such as pose determination, object recognition, and model building, rely on a geometrical description of the visible surface derived from 3-D scattered measurements. Even though much effort has been invested in surface description, little attention has been paid to the invariant recovery of geometric information from an actual noisy 3-D signal. In this work, we argue that the local description of a section of a visible surface must be stable with respect to the measurement set, whichever of the scene's viewpoints it is gathered from. Stability can be achieved on sections where the constraints are redundant with respect to a polynomial model. A segmentation approach is developed to identify such stable sections, based on a measurement error model that takes the sensor's viewpoint into account. The case of straight-line extraction from 3-D single-scan profiles of a surface is presented. The identified stable linear sections are stored in a graph that includes, for each section, the estimated descriptive parameters and indices of reliability for the description.
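Extraction of stable linear sections from a single scan profile can be sketched as follows, with a fixed residual tolerance standing in for the paper's viewpoint-dependent error model; the profile is assumed to be an array of (x, z) samples:

```python
import numpy as np

def fit_line(points):
    """Least-squares line z = a*x + b; returns (a, b, rms residual)."""
    x, z = points[:, 0], points[:, 1]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, z, rcond=None)
    rms = float(np.sqrt(np.mean((A @ [a, b] - z) ** 2)))
    return float(a), float(b), rms

def split_into_stable_segments(profile, tol):
    """Greedily grow each section while its line fit stays within the noise
    tolerance; a residual above tol ends the current stable section."""
    segments, start = [], 0
    for end in range(2, len(profile) + 1):
        _, _, rms = fit_line(profile[start:end])
        if rms > tol:
            segments.append((start, end - 1))
            start = end - 1
    segments.append((start, len(profile)))
    return segments
```

Each `(start, end)` pair delimits one stable section whose fitted parameters and residual would populate the description graph.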
In this research, we propose a novel algorithm that recovers surface height directly from images obtained via perspective projection. Most existing shape-from-shading (SFS) algorithms are developed under the assumption of an orthographic projection model, which is a special case of the general perspective projection model. As a result, substantial reconstruction errors arise in real applications, especially when the object is not far from the camera. By introducing an explicit triangular-element surface model and combining it with the perspective imaging geometry, we are able to relax the orthographic projection assumption and derive a more general reflectance model under perspective projection. We formulate the perspective SFS problem as a minimization problem parameterized by surface nodal heights, and determine the surface height by solving an equivalent linear system of equations. Simulation results for several test images are given to demonstrate the performance of the proposed algorithm. The proposed perspective SFS algorithm gives more accurate results than conventional SFS algorithms and is attractive in practical applications.
This paper describes ongoing research into machine vision systems based on line-scan (linear array) cameras. Such devices have been used successfully in the production-line environment, since the movement inherent in the manufacturing process can be utilized for image production. However, such applications have traditionally used the line-scan device in a purely two-dimensional role. Initial research was carried out to extend such 2-D arrangements into a 3-D system, retaining the lateral motion of the object with respect to the camera. The resulting stereoscopic camera allowed three-dimensional coordinate data to be extracted from a moving object volume (workspace). The most recent work has involved rotating line-scan systems in relation to a static scene. This allows images to be produced with fields of view varying in both size and position during the rotation. Owing to the nature of the movement, the images can be complex, depending on the size of the field of view selected. Benefits of obtaining images in this fashion include `all-round' observation, variable resolution along the movement axis, and a calibrated volume that can be moved to observe any point in a 360-degree arc.
One of the fundamental problems of machine vision is the estimation of object depth from perceived images. This paper describes both an apparatus and the corresponding algorithms for the passive extraction of object depth. Here passive extraction implies the processing of images acquired using only the existing illumination, in this case uniform white light. Depth from defocus algorithms are extremely sensitive to image variations. Regularization, the application of a priori constraints, is employed to improve the accuracy of the range measurements. When the camera's point spread function is shift invariant, an adaptive algorithm is developed in the frequency domain. When the camera's point spread function is shift varying, an adaptive algorithm is developed in the spatial domain. Data is acquired from line scan cameras. Only a single range measurement or a single depth profile is extracted.
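For the shift-invariant case, the frequency-domain idea can be sketched under a Gaussian point-spread-function assumption, where the log-magnitude ratio of two differently defocused signals is linear in squared frequency and its slope encodes the relative blur:

```python
import numpy as np

def relative_blur(img1, img2):
    """Estimate sigma2^2 - sigma1^2 from two differently defocused 1-D
    signals, assuming Gaussian PSFs so that
    ln|I1(f)/I2(f)| = 2*pi^2*(sigma2^2 - sigma1^2)*f^2."""
    F1, F2 = np.abs(np.fft.rfft(img1)), np.abs(np.fft.rfft(img2))
    f = np.fft.rfftfreq(len(img1))
    keep = (F1 > 1e-8) & (F2 > 1e-8) & (f > 0)  # drop unreliable bins
    ratio = np.log(F1[keep] / F2[keep])
    x = f[keep] ** 2
    slope = float(x @ ratio / (x @ x))          # least-squares slope vs f^2
    return slope / (2 * np.pi ** 2)
```

The recovered blur difference maps to depth through the lens equation; regularization, as in the paper, would stabilize this estimate under noise.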
We describe a real-time implementation of a monocular `eye in hand' approach to acquiring 3-D information about a scene in a robotic environment. By tracking points through an image sequence taken from a moving camera, the correspondence and occlusion problems are solved. Methods are presented which render textured surfaces in an image and allow the dynamic selection of points in real time. To overcome inaccurate knowledge of the exterior orientation of the camera (which is mounted on a robot), control points are placed in the scene and the exact exterior orientation is determined by means of resection. Two different methods of handling 3-D data, and first results, are reported.
This work outlines an interactive approach to the reconstruction of solid models from range images. It is to be integrated into a telerobotic system in order to dynamically create a coarse geometric world model suitable for supporting path-planning and grasp-planning tasks in a telerobotic environment. A solid is modelled by a collection of parametric surfaces known as superquadrics, each of which approximates the shape of one part of a solid object. Interactive techniques are discussed that allow the user to recover model parameters for multiple objects in a scene. A graph data structure called a scene graph is used for integrated storage of the image data and the knowledge supplied by the operator. After evaluation of the scene graph, model parameters are recovered by minimizing an error measure based on the distance between the range points and the corresponding locations on the model's surface. The Levenberg-Marquardt method is used to ensure convergence of the solution. We implemented the minimization algorithm in parallel on a transputer network.
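The parameter-recovery step can be sketched with a generic Levenberg-Marquardt loop; for brevity a circle fit stands in for the full superquadric model, and the damping schedule is illustrative:

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian.
    `residual(p)` returns the vector of model-to-data distances."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (residual(p + dp) - r) / 1e-6
        A = J.T @ J + lam * np.eye(len(p))       # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5          # accept: more Gauss-Newton
        else:
            lam *= 10.0                           # reject: more gradient descent
    return p
```

For superquadrics, `residual` would instead return the distance measure between each range point and its corresponding location on the model surface.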
In this paper we present a novel technique for rapidly partitioning surfaces in range images into planar patches. Essential for our segmentation method is the observation that in a scan line the points belonging to a planar surface form a straight line segment. Based on this observation, we first divide each scan line into straight line segments and subsequently consider only the set of line segments of all scan lines as segmentation primitives. The principle of our segmentation method is region growing in terms of line segments. We use a noise variance estimation to automatically set thresholds so that the algorithm can adapt to the noise conditions of different range images. The proposed algorithm has been tested on a large number of real range images acquired by two different range sensors. Experimental results show that the proposed algorithm is fast and robust.
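Dividing one scan line into straight-segment primitives can be sketched with a recursive maximum-deviation split (an illustrative stand-in for the noise-adaptive thresholding described in the paper):

```python
import numpy as np

def split_scanline(points, tol):
    """Recursively split a scan line wherever the farthest point deviates
    from the chord by more than tol; the resulting (start, end) index
    pairs are the straight-segment primitives for region growing."""
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # chord normal
    dist = np.abs((points - p0) @ n)
    k = int(np.argmax(dist))
    if dist[k] <= tol or len(points) <= 2:
        return [(0, len(points) - 1)]
    left = split_scanline(points[:k + 1], tol)
    right = split_scanline(points[k:], tol)
    return left + [(a + k, b + k) for a, b in right]
```

Region growing then merges segments from neighboring scan lines that fit a common plane.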
This paper describes the implementation of an optical flow range-estimation algorithm, intended to support autonomous helicopter navigation, on the iWarp distributed-memory multicomputer. The implementation exploits numerous features of the iWarp communications and computation architecture to maximize performance and efficiency. The range-estimation algorithm has been previously implemented on both distributed and shared-memory workstations. Experiences and performance figures from the initial iWarp implementation are discussed along with plans for future refinements.
We discuss the implementations of parallel programming tools, mathematical tools, and some signal and image processing applications on the Intel iWarp system. The paper starts with a discussion on parallel processing for signal and image processing applications. Some issues related to programming on the iWarp system are addressed. The paper presents implementations of efficient parallel programming tools on the iWarp system and discusses mapping applications to the iWarp system using these programming tools. The applications mapped to the iWarp system include a two-dimensional fast Fourier transform (2-D FFT), a few matrix computation algorithms, two low-level image processing schemes, and an acoustic signal processing algorithm. Performance results from our implementations are presented and analyzed.
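The 2-D FFT maps naturally onto such a machine through its row-column decomposition, sketched here in serial form:

```python
import numpy as np

def fft2_rowcol(a):
    """2-D FFT computed as 1-D FFTs over rows and then over columns --
    the separable decomposition used to distribute the transform."""
    rows = np.fft.fft(a, axis=1)      # each cell transforms its rows
    return np.fft.fft(rows, axis=0)   # exchange (transpose), then columns
```

On the iWarp, each cell would transform its block of rows, perform a distributed transpose over the communication fabric, and then transform its block of columns.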
This paper describes a preliminary sensory system for real-time sensor-based navigation in a three-dimensional, dynamic environment. Data from a laser range camera are processed on an iWarp parallel computer to create a 3-D occupancy map. This map is rendered using raytracing. The construction and rendering consume less than 800 milliseconds.
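The map-construction step can be sketched as a simple voxelization of the range points (the grid geometry below is illustrative):

```python
import numpy as np

def build_occupancy_map(points, origin, voxel, shape):
    """Mark the voxels of a 3-D occupancy grid hit by range measurements.
    `points` is an (N, 3) array; `voxel` is the cell size in meters."""
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((points - origin) / voxel).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid[tuple(idx[inside].T)] = True
    return grid
```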
Systems for high-precision control of the trajectory to be followed by a robot arm gripper need to model the interaction among the robot arm joints properly and to cope with the high speed and nonlinearity of the arm dynamics. To solve this problem, the use of a hardware accelerator, able to exploit parallelism within multivariable self-tuning control algorithms, is proposed. The accelerator works as part of an integrated system that incorporates computer vision and robot arm trajectory definition facilities. The computer vision sub-system recognizes the position of an object selected to be picked up by the robot arm, and the trajectory definition sub-system uses a neural network to define the angular positions of the joints along the trajectory to be followed by the arm.
Animate vision depends on an ability to choose a region of the visual environment for task-specific processing. This processing may involve extraction of image features for object classification or identification, or it may involve extraction of viewpoint parameters, such as position, scale, and orientation, for guiding movement. It is the role of selective attention to choose the region to be processed in a task-dependent way. This paper describes a real-time implementation of a vision-robotics system that uses the location information provided by the attention mechanism to guide eye movements and arm movements in touching and manual tracking behaviors. The approach makes use of a 3-D retinocentric coordinate frame for representing position information, and differential kinematics for relating the eye and arm motor systems to this retinocentric sensory frame.
This paper addresses the problem of determining the reliability of the individual sensors in a multi-sensor robotic system operating in an unknown environment. A system that can determine the reliability of its sensors is in general more robust, since it can wisely decide which sensors are most appropriate for a given task and can also determine whether sensor conflicts are the result of poorly performing sensors. This research focuses on sensor confidence, which we define as the trust placed in a sensor based on its performance, or on how well the system judges it to perform. A sensor's performance is its execution of the act of sensing -- its response to stimulus from the environment. In our approach, the overall system determines which sensors perform reliably; it does not attempt to make the sensors more reliable. In other words, we aim to provide the system with information reflecting the degree to which its sensors can be believed; we do not try to modify the sensors or their performance to make them more believable.
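One minimal way to maintain such a confidence value, shown purely as an illustration rather than the paper's method, is an exponentially weighted agreement score against the multi-sensor consensus:

```python
def update_confidence(conf, reading, consensus, rate=0.1, tol=1.0):
    """Move confidence toward 1 when the sensor agrees with the consensus
    (within tol), toward 0 when it conflicts; rate sets the memory length."""
    agree = abs(reading - consensus) <= tol
    return conf + rate * ((1.0 if agree else 0.0) - conf)
```

A task planner could then prefer high-confidence sensors and treat conflicts involving low-confidence sensors with suspicion.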
This paper describes a control architecture for real-time control of complex robotic systems. The modular integrated control architecture (MICA), which is actually two complementary control systems, recognizes and exploits the differences between asynchronous and synchronous control. The asynchronous control system simulates shared memory on a heterogeneous network. For control information, a portable event-scheme is used. This scheme provides consistent interprocess coordination among multiple tasks on a number of distributed systems. The machines in the network can vary with respect to their native operating systems and the internal representation of numbers they use. The synchronous control system is needed for tight real-time control of complex electromechanical systems such as robot manipulators, and the system uses multiple processors at a specified rate. Both the synchronous and asynchronous portions of MICA have been developed to be extremely modular. MICA presents a simple programming model to code developers and also considers the needs of system integrators and maintainers. MICA has been used successfully in a complex robotics project involving a mobile 7-degree-of-freedom manipulator in a heterogeneous network with a body of software totaling over 100,000 lines of code. MICA has also been used in another robotics system, controlling a commercial long-reach manipulator.
A new approach is introduced for calculating inverse kinematic functions whose domain of solution is not limited to the manipulator's workspace. With this approach, every point in Cartesian space leads to a real solution when used as input to the inverse kinematic problem. The solution gives the minimum distance between the end-effector and the prescribed Cartesian point for a given orientation of the end-effector. The proposed inverse kinematic functions are of great interest for tracking, approach, and catching operations in which the object to be reached or tracked by the manipulator may lie inside or outside the robot's workspace.
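The minimum-distance idea can be illustrated on the simplest case, a planar two-link arm: an out-of-reach target is first projected to the nearest reachable point, so the inverse kinematics is defined everywhere. This is a sketch of the idea under that simplifying assumption, not the paper's general formulation.

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics for a planar 2R arm, defined for EVERY Cartesian
    point: targets outside the annular workspace are projected to the
    nearest reachable point, which minimizes end-effector-to-target
    distance."""
    r = math.hypot(x, y)
    # Project the target radius into [ |l1-l2|, l1+l2 ].
    r_c = min(max(r, abs(l1 - l2)), l1 + l2)
    if r > 0:
        x, y = x * r_c / r, y * r_c / r
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # numerical safety
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# A target at distance 5 is unreachable (max reach 2); the solution
# stretches the arm straight toward it.
t1, t2 = ik_2link(5.0, 0.0)
ex = math.cos(t1) + math.cos(t1 + t2)
ey = math.sin(t1) + math.sin(t1 + t2)
```

The resulting end-effector pose (2, 0) is the closest reachable point to the target, which is exactly the behavior wanted for approach and tracking tasks.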
In [Gup90, GG92], we have presented a sequential framework that allows us to develop planners for manipulator arms with many degrees of freedom. The essence of this framework is to exploit the serial structure of manipulator arms and decompose the n-dimensional problem of planning collision-free motions for an n-link manipulator into a sequence of smaller m-dimensional sub-problems, each of which corresponds to planning the motion of a sub-group of m-1 links. In this paper, we present extensive experimental results within our sequential framework for a variety of manipulators with up to eight degrees of freedom. The two main goals of these simulations are (1) to show the effectiveness of the sequential approach with the backtracking mechanism, and (2) to quantify the improvement due to the backtracking mechanism and the trade-off between the number of backtrackings and the execution time of the planner. Our experiments show that the sequential framework with the backtracking mechanism is quite efficient for manipulator arms with many degrees of freedom. For a given maximum backtracking level, the run time and memory requirements vary roughly linearly with the degrees of freedom. The planner succeeds for 91% of the examples in our simulations with a maximum backtracking level of 2. Typical run times for a six-degree-of-freedom manipulator in quite cluttered environments are on the order of tens of minutes. Our sequential approach thus provides a framework within which practical motion planners for many-degree-of-freedom manipulators can be developed.
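The control structure of sequential planning with bounded backtracking can be sketched as a skeleton. The sub-planner, its alternatives, and the backtracking bookkeeping here are illustrative assumptions; only the shape (plan one stage at a time, retreat one stage on failure, give up when the budget is spent) reflects the framework.

```python
def plan_sequential(n_links, plan_sublink, max_backtracks=2):
    """Plan links one stage at a time; when a sub-problem fails, backtrack
    to the previous stage and try its next alternative (skeleton only).
    plan_sublink(i, partial, attempt) returns a motion or None."""
    partial = []               # motions planned so far, one per stage
    tried = [0] * n_links      # alternatives already tried at each stage
    i = 0
    while i < n_links:
        motion = plan_sublink(i, partial, tried[i])
        if motion is not None:
            partial.append(motion)
            i += 1
        else:                  # sub-problem failed: backtrack one stage
            if i == 0 or tried[i - 1] >= max_backtracks:
                return None    # backtracking budget exhausted
            tried[i] = 0
            tried[i - 1] += 1
            partial.pop()
            i -= 1
    return partial

# Toy sub-planner: stage 2 only succeeds if stage 1 chose its second
# alternative, forcing one backtrack.
def sub(i, partial, attempt):
    if i == 2 and partial[1] != "alt1":
        return None
    return f"alt{attempt}"

plan = plan_sequential(3, sub)
```

Bounding `max_backtracks` is what keeps run time and memory growing roughly linearly with the number of stages, at the cost of occasionally failing on solvable problems.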
A polynomial-neural-network-based (PNN-based) path planning with an obstacle avoidance scheme is proposed for mobile robot navigation. The PNN is a feature-based mapping neural network which can be successfully trained to interpolate an unknown function by observing a few samples. In this work, a data analysis technique called the group method of data handling (GMDH) is used to build the PNN. The built PNNs are used for the path planning of a sonar-sensor-guided mobile robot. The major advantage of the PNNs is that they use the environment data efficiently and reduce the computational complexity. Also, in this approach, no preprocessing of range data is required.
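A single GMDH building block fits the quadratic Ivakhnenko polynomial of two inputs by least squares; a full network stacks and selects such blocks. The sketch below shows only that one block, interpolating an unknown function from a handful of samples (the data and function are made up for illustration; this is not the paper's planner).

```python
import numpy as np

def gmdh_pair_model(x1, x2, y):
    """Fit one GMDH unit, the quadratic polynomial
    y ~ a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2,
    by least squares, and return it as a callable."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda u1, u2: (a[0] + a[1]*u1 + a[2]*u2
                           + a[3]*u1**2 + a[4]*u2**2 + a[5]*u1*u2)

# Twenty samples of an unknown (here, secretly quadratic) surface suffice.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20)
x2 = rng.uniform(-1, 1, 20)
y = 1 + 2*x1 - x2 + 0.5*x1*x2
model = gmdh_pair_model(x1, x2, y)
```

In a full GMDH network, many such pairwise units are generated per layer and the best (by validation error) survive to feed the next layer, which is how the network self-organizes from few samples.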
A small mobile robot can be of great use in exploring environments, maneuvering through dangerous areas, identifying and tracking objects, and carrying cargo. Current planning methods for robots either rely on heavy on-board processing, making use of multiple goals, learning, and failure recovery, or use very little on-board power, running small reactive plans. We describe a method that makes use of both types of planning. While an on-board processor generates small reactive plans for one particular goal, an off-site computer performs goal management and learns from the robot's failures and successes to modify the rule base for the robot's future plans. This paper describes these ideas and illustrates their use on a T1 mobile robot.
Forms of all types are used in businesses and government agencies, and most of them are filled in by hand. Yet much time and effort have been expended to automate form-filling by programming task-specific systems, and the high cost of programmers and other resources prohibits many organizations from benefitting from efficient office automation. A learning apprentice can be used for such repetitious form-filling tasks. In this paper, we establish the need for learning apprentices, describe a framework for such a system, explain the difficulties of form-filling, and present empirical results from a form-filling system used in our department from September 1991 to April 1992. The form-filling apprentice saves up to 84% in keystroke effort and correctly predicts nearly 90% of the values on the form.
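The core of such an apprentice can be sketched with the simplest possible predictor: propose each field's most frequent past value and count the keystrokes saved when the prediction is accepted. The class, field names, and metric below are illustrative assumptions, far simpler than the paper's system, but they show where the keystroke savings come from.

```python
from collections import Counter

class FieldPredictor:
    """Toy form-filling apprentice: predict each field from its most
    frequent past value; count keystrokes saved on correct predictions."""
    def __init__(self):
        self.history = {}                 # field name -> Counter of values

    def predict(self, field):
        counts = self.history.get(field)
        return counts.most_common(1)[0][0] if counts else None

    def fill(self, field, actual):
        """Record the value actually entered; return keystrokes saved."""
        guess = self.predict(field)
        saved = len(actual) if guess == actual else 0
        self.history.setdefault(field, Counter())[actual] += 1
        return saved

p = FieldPredictor()
# A department field that takes the same value on ten consecutive forms:
saved = sum(p.fill("dept", "CS") for _ in range(10))
```

The first occurrence must be typed in full; every later one is predicted, so 9 of the 10 entries cost nothing, and richer predictors (conditioning on other fields, for instance) only improve on this baseline.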
An automated analysis of ocular fundus images allows ophthalmologists to diagnose or prognosticate diseases more effectively, based on reliable fundus evaluations. This paper presents a computer method for identifying one of the most important features in fundus images, the retinal vascular network.
The automation of plant micropropagation is necessary to produce large amounts of biomass. Plants must be dissected at particular cutting points, and a vision system is needed to recognize the cutting points on the plants. Against this background, this contribution addresses the underlying formalism for determining cutting points on abstract plant models. We show the usefulness of pattern recognition by graph rewriting, along with some examples in this context.
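The flavor of matching a rewrite rule's pattern against an abstract plant model can be sketched on a tiny graph. The plant structure, node types, and cutting rule below are all made-up illustrations; the paper's graph-rewriting formalism is more general, and only the pattern-matching step is shown.

```python
# Toy abstract plant model: a rooted graph of typed nodes.
plant = {
    "root":  {"type": "stem",      "children": ["n1"]},
    "n1":    {"type": "internode", "children": ["leaf1", "n2"]},
    "leaf1": {"type": "leaf",      "children": []},
    "n2":    {"type": "internode", "children": ["leaf2"]},
    "leaf2": {"type": "leaf",      "children": []},
}

def cutting_points(graph):
    """Illustrative pattern (a rewrite rule's left-hand side): cut at any
    internode that directly carries a leaf."""
    points = []
    for node, data in graph.items():
        if data["type"] == "internode" and any(
                graph[c]["type"] == "leaf" for c in data["children"]):
            points.append(node)
    return sorted(points)
```

A full graph-rewriting system would pair each such pattern with a right-hand side describing the dissected result, so repeated rule application models the whole micropropagation cut.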
A number of objects in the real world can be described as surfaces of revolution. These are a particular type of generalized cylinder with a straight axis, whose 3-D shape is formed by rotating a 2-D planar curve about the axis. Examples of such objects are vases, many chess pieces, light bulbs, table lamps, etc. This paper describes a number of techniques that can be used to recognize this class of object in a typical cluttered scene under perspective projection. Use is made of the symmetry of the occluding boundary, perceptual grouping of ellipses, 3-D models, and the hypothesis that an ellipse in the image is a circle in the real world.
HIPS-2 is a set of image processing modules that provides a powerful suite of tools for those interested in research, system development, and teaching. In this paper we first review the design considerations used to develop HIPS-2 from its predecessor, HIPS. We then outline a number of further developments under consideration, including a graphical user interface, more general approaches to color from the standpoint of color reproduction and linear models of color, and extensions to the software and data format to deal with production-oriented tasks.
A robot skill is the ability of a robot to repeatably accomplish a useful action that can be described unambiguously to a production engineer in English, and formally to a systems programmer using a skill template. A robot operation is created on-line by parameterizing and connecting robot skills. The on-line interface is iconic, with each icon representing either a single skill or an abstraction of two or more skill icons. The objective of modeling robot operations as a sequence of skills is to reduce the cost of programming robot systems. This is achieved by separating the programming responsibilities of the application specialist and the systems programmer: the application specialist who works on the shop floor can create robot programs without being concerned with the low-level programming that controls sensors and devices. A computational paradigm for creating and maintaining robot skills is presented. This is the underlying software architecture that enables the creation of a shop-floor interface. It is an object-based paradigm designed to abstract the low-level functions of the sensors and machine controllers. Skills are defined as template objects with a list of attributes that completely specify a sensor-based robot action. The other objects include sensor drivers, virtual sensors, and machine drivers.
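The template-then-connect idea can be sketched in object form. The class names, attribute lists, and toy skills below are assumptions invented for illustration; only the structure (a formally described skill, parameterized by a specialist, then chained into an operation) mirrors the paradigm.

```python
class SkillTemplate:
    """A formally described robot action: named attributes the application
    specialist must fill in, wrapping a low-level routine they never see."""
    def __init__(self, name, attributes, action):
        self.name = name
        self.attributes = attributes   # parameters exposed to the specialist
        self.action = action           # hidden low-level implementation

    def instantiate(self, **params):
        """Bind all attributes, yielding a runnable step of an operation."""
        missing = set(self.attributes) - set(params)
        if missing:
            raise ValueError(f"unset attributes: {missing}")
        return lambda state: self.action(state, **params)

def run_operation(state, steps):
    """A robot operation is a sequence of instantiated skills."""
    for step in steps:
        state = step(state)
    return state

# Hypothetical skills: move to a pose, then close the gripper with a force.
move  = SkillTemplate("move",  ["pose"],  lambda s, pose: {**s, "pose": pose})
grasp = SkillTemplate("grasp", ["force"], lambda s, force: {**s, "grip": force})

op = [move.instantiate(pose=(1, 2)), grasp.instantiate(force=5)]
state = run_operation({"pose": (0, 0), "grip": 0}, op)
```

The specialist works only with `instantiate` and the attribute names; swapping the `action` bodies for real sensor and machine drivers changes nothing at this level, which is the cost-saving separation the paper argues for.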