We present a mathematical model of interacting neuron-like units that we call Recurrent Feedback Neuronal Networks (RFNNs). Our model is closer to biological neural networks than current approaches (e.g., layered neural networks, the perceptron). Classification and reasoning in RFNNs are carried out by an iterative algorithm, and learning changes only network structure (edge weights are fixed); the model thus emphasizes structure over weights. RFNNs are more flexible and scalable than previous approaches: in particular, a new node can be integrated so that it affects the outputs of existing nodes without modifying their prior structure. RFNNs produce informative responses to partial inputs and when the networks are extended to other tasks, and they enable recognition of complex entities (e.g., images) from their parts. Owing to its flexibility, its dynamics, and its structural similarity to natural neuronal networks, this model is promising for future integrated, human-level intelligent applications.
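The abstract does not specify the iterative algorithm, so the following is only a toy illustration of the stated properties under loudly labeled assumptions: binary units, all edge weights fixed at 1, a simple threshold update iterated to a fixed point, and "learning" realized purely as adding nodes and edges. The class and node names (`ToyRFNN`, `edge_a`, `face`) are hypothetical and not from the paper.

```python
# Hypothetical sketch, NOT the paper's algorithm: a fixed-weight network in
# which learning only grows structure, and inference is iterative settling.

class ToyRFNN:
    def __init__(self):
        self.inputs = {}  # node -> list of source nodes (every weight fixed at 1)

    def add_node(self, node, sources=()):
        # "Learning" adds a node and its incoming edges; the prior edges of
        # existing nodes are never modified.
        self.inputs[node] = list(sources)

    def run(self, clamped, steps=10):
        # Iterative classification: clamp the (possibly partial) input and
        # repeatedly apply a fixed threshold update until the state settles.
        state = {n: 0 for n in self.inputs}
        state.update(clamped)
        for _ in range(steps):
            new = {}
            for n, srcs in self.inputs.items():
                if n in clamped:
                    new[n] = clamped[n]
                else:
                    # Fire if at least one source is active (fixed threshold).
                    new[n] = int(sum(state.get(s, 0) for s in srcs) >= 1)
            state = new
        return state


net = ToyRFNN()
net.add_node("edge_a")                              # part detectors
net.add_node("edge_b")
net.add_node("face", sources=["edge_a", "edge_b"])  # whole recognized from parts
out = net.run({"edge_a": 1})                        # partial input still activates "face"
```

This toy only mirrors the abstract's claims qualitatively: the `face` node responds even though only one of its parts is present, and further nodes could be attached to `face`'s source list without disturbing its earlier edges.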