Paper
1 April 1991 Aspect networks: using multiple views to learn and recognize 3-D objects
Michael Seibert, Allen M. Waxman
Proceedings Volume 1383, Sensor Fusion III: 3D Perception and Recognition; (1991) https://doi.org/10.1117/12.25240
Event: Advances in Intelligent Robotics Systems, 1990, Boston, MA, United States
Abstract
This paper addresses the problem of generating models of 3-D objects automatically from exploratory view-sequences of the objects. Neural network techniques are described which cluster the frames of video-sequences into view-categories, called aspects, representing the 2-D characteristic views. Feedforward processes ensure that each aspect is invariant to the apparent position, size, orientation, and foreshortening of an object in the scene. The aspects are processed in conjunction with their associated aspect-transitions by the Aspect Network to learn and refine the 3-D object representations on-the-fly. Recognition is indicated by the object-hypothesis which has accumulated the maximum evidence. The object-hypothesis must be consistent with the current view, as well as the recent history of view transitions stored in the Aspect Network. The “winning” object refines its representation until either the attention of the camera is redirected or another hypothesis accumulates greater evidence.
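The evidence-accumulation idea in the abstract can be illustrated with a minimal sketch (not the authors' Aspect Network equations): each object hypothesis is scored against the stream of observed aspect transitions using a hypothetical learned transition matrix per object, with a decay term so recent transitions dominate, and the hypothesis with the greatest accumulated evidence "wins". All names and values below are illustrative assumptions.

```python
import numpy as np

def accumulate_evidence(transition_models, aspect_sequence, decay=0.9):
    """Return per-object evidence after observing a sequence of aspects.

    transition_models : dict mapping object name -> (n_aspects, n_aspects) array,
                        where T[i, j] is the learned strength of the i -> j transition.
    aspect_sequence   : list of aspect (view-category) indices observed over time.
    decay             : forgetting factor so recent transitions dominate.
    """
    evidence = {obj: 0.0 for obj in transition_models}
    for prev, curr in zip(aspect_sequence, aspect_sequence[1:]):
        for obj, T in transition_models.items():
            # Older evidence decays; transitions consistent with the model add support.
            evidence[obj] = decay * evidence[obj] + T[prev, curr]
    return evidence

# Toy usage: two hypothetical objects with different 3-aspect transition structure.
models = {
    "object_A": np.array([[0.0, 1.0, 0.2],
                          [0.3, 0.0, 1.0],
                          [1.0, 0.2, 0.0]]),
    "object_B": np.array([[0.0, 0.2, 1.0],
                          [1.0, 0.0, 0.2],
                          [0.2, 1.0, 0.0]]),
}
observed = [0, 1, 2, 0]                 # aspect indices seen as the camera explores
scores = accumulate_evidence(models, observed)
winner = max(scores, key=scores.get)    # the "winning" object hypothesis
print(scores, winner)
```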
© (1991) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Michael Seibert and Allen M. Waxman "Aspect networks: using multiple views to learn and recognize 3-D objects", Proc. SPIE 1383, Sensor Fusion III: 3D Perception and Recognition, (1 April 1991); https://doi.org/10.1117/12.25240
KEYWORDS: 3D modeling, Sensor fusion, Neurons, Object recognition, Cameras, Computer aided design, Head