Paper
Filtered kernel probabilistic neural network
1 September 1993
George W. Rogers, Carey E. Priebe, Jeffrey L. Solka
Abstract
Probabilistic neural networks (PNNs) build internal density representations based on the kernel, or Parzen, estimator and use Bayesian decision theory to form arbitrarily complex decision boundaries. As with the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. One important factor affecting convergence is the kernel width; theory provides an optimal width only for normally distributed data, and the problem becomes acute in the multivariate case. In this paper we present an asymptotically optimal method of setting kernel widths for multivariate Gaussian kernels, based on the theory of filtered kernel estimators, and show how it can be realized as a filtered kernel PNN architecture. Performance comparisons are made with competing methods.
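The classical PNN the abstract builds on estimates each class-conditional density with a Gaussian-kernel Parzen sum and applies the Bayes decision rule. Below is a minimal NumPy sketch of that baseline, using the normal-reference (Silverman) bandwidth, i.e. the width that is optimal only for normally distributed data; the filtered-kernel width selection that is the paper's contribution is not reproduced here, and all function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def normal_reference_bandwidth(X):
    """Per-dimension normal-reference (Silverman) widths for Gaussian kernels.

    Asymptotically optimal only when the data are Gaussian -- exactly the
    limitation the paper's filtered-kernel widths are meant to address.
    """
    n, d = X.shape
    return X.std(axis=0, ddof=1) * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

def parzen_density(x, X, h):
    """Single-pass Parzen estimate at point x from training samples X."""
    d = X.shape[1]
    z = (x - X) / h                               # standardized offsets to each sample
    kernel = np.exp(-0.5 * (z ** 2).sum(axis=1))  # product Gaussian kernel per sample
    norm = (2.0 * np.pi) ** (d / 2.0) * np.prod(h)
    return kernel.sum() / (len(X) * norm)

def pnn_classify(x, class_data, priors=None):
    """Bayes decision over class-conditional Parzen estimates (the PNN rule)."""
    if priors is None:  # default to empirical class priors
        total = sum(len(X) for X in class_data.values())
        priors = {c: len(X) / total for c, X in class_data.items()}
    scores = {c: priors[c] * parzen_density(x, X, normal_reference_bandwidth(X))
              for c, X in class_data.items()}
    return max(scores, key=scores.get)

# Illustrative two-class example (synthetic data, not from the paper):
rng = np.random.default_rng(0)
data = {0: rng.normal(0.0, 1.0, size=(50, 2)),
        1: rng.normal(2.5, 1.0, size=(50, 2))}
print(pnn_classify(np.array([2.0, 2.0]), data))  # expect class 1
```

The single training pass noted in the abstract corresponds to simply storing the samples: all computation happens at decision time, one kernel evaluation per stored pattern.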
© 1993 Society of Photo-Optical Instrumentation Engineers (SPIE).
George W. Rogers, Carey E. Priebe, and Jeffrey L. Solka "Filtered kernel probabilistic neural network", Proc. SPIE 1962, Adaptive and Learning Systems II, (1 September 1993); https://doi.org/10.1117/12.150592
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
Data modeling
Error analysis
Statistical analysis
Gaussian filters
Classification systems
Probability theory