Paper
1 July 1992
Deterministic learning theory and a parallel cascaded one-step learning machine
Chia-Lun John Hu
Abstract
For a one-layered hard-limited perceptron, it is well known that if the given training set is not linearly separable in the state space, the machine cannot learn it no matter what learning method is used. This separability property is generally studied from a geometrical point of view. This paper reports the derivation of an algebraic criterion for the separability of a given mapping set. A one-step learning method is then derived that either finds the required weight matrix in a single non-iterative step, or informs the teacher that the given mapping set is inseparable, and therefore not learnable under any learning rule. A parallel cascaded two-layered perceptron is then derived that may overcome these learning difficulties.
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Chia-Lun John Hu, "Deterministic learning theory and a parallel cascaded one-step learning machine," Proc. SPIE 1710, Science of Artificial Neural Networks (1 July 1992); https://doi.org/10.1117/12.140127
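The abstract above does not reproduce the paper's algebraic separability criterion or its exact one-step rule. As a rough illustration of the flavor of a non-iterative solve for a hard-limited perceptron, here is a minimal sketch; the pseudo-inverse candidate and all names (one_step_train, X, Y) are this note's assumptions, not the paper's method.

```python
# Minimal sketch, NOT the paper's algorithm: a one-step (non-iterative)
# candidate solve for a one-layered hard-limited perceptron, using a
# least-squares pseudo-inverse as the single step.
import numpy as np

def one_step_train(X, Y):
    """X: (n_inputs, n_samples) training inputs (include a bias row);
    Y: (n_outputs, n_samples) bipolar (+1/-1) target outputs.
    Returns a candidate weight matrix W and whether the hard-limited
    outputs sign(W @ X) reproduce Y exactly."""
    W = Y @ np.linalg.pinv(X)               # the single non-iterative step
    ok = bool(np.all(np.sign(W @ X) == Y))  # hard-limit check on every sample
    return W, ok

# Usage: a linearly separable bipolar AND-like mapping, bias row appended.
X = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1],
              [1,  1,  1,  1]])   # third row is the constant bias input
Y = np.array([[1, -1, -1, -1]])
W, ok = one_step_train(X, Y)
print(W, ok)   # ok is True: a separating weight matrix found in one step
```

A True check confirms separability for the given mapping set; a False check with this particular candidate is only suggestive, since deciding inseparability definitively requires a criterion such as the algebraic one the abstract describes (or, equivalently, a linear-programming feasibility test on the sign inequalities).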
KEYWORDS
Artificial neural networks
Machine learning
Brain mapping
Mathematics
Neurons
