Paper
2 September 1993
Performance aspects of mapping neural networks onto a massively parallel SIMD computer
Andreas Zell, Michael C. Vogt, Niels Mache, Markus Hüttel
Abstract
In this paper we present and compare three different massively parallel implementations of multilayer feedforward neural networks on a MasPar MP-1216, a parallel SIMD computer with 16,384 processors. For multilayer feedforward networks we have obtained sustained rates of up to 348 MCPS (million connections per second) and 129 MCUPS (million connection updates per second) with backpropagation, a high mark for general-purpose SIMD computers. After a brief introduction to SNNS, the paper first focuses on the problems of mapping neural networks onto parallel hardware, and different aspects of parallelism are presented. Two combinations of unit and training-pattern parallelism were implemented, as well as link and training-pattern parallelism. We describe the implementation problems encountered in obtaining high propagation rates on a SIMD machine and the general problems the mappings pose for the resulting learning algorithms.
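The abstract only names the mapping strategies, but training-pattern parallelism can be illustrated with a small batched forward pass: each training pattern is propagated simultaneously, which on the MasPar would correspond to one pattern per processing element. The following NumPy sketch is illustrative only; the layer sizes and variable names are hypothetical and it is not the paper's SNNS/MasPar implementation.

import numpy as np

# Minimal sketch of training-pattern parallelism for a two-layer
# feedforward network: all n_patterns patterns are propagated at once
# via batched matrix products, the pattern axis standing in for the
# SIMD processor axis. Sizes below are arbitrary examples.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_patterns = 8, 16, 4, 32

W1 = rng.standard_normal((n_in, n_hidden))   # input -> hidden weights
W2 = rng.standard_normal((n_hidden, n_out))  # hidden -> output weights
X = rng.standard_normal((n_patterns, n_in))  # one row per training pattern

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward propagation for all patterns in parallel.
hidden = sigmoid(X @ W1)       # shape (n_patterns, n_hidden)
output = sigmoid(hidden @ W2)  # shape (n_patterns, n_out)
print(output.shape)

Unit or link parallelism would instead distribute the neurons or the individual weights of one pattern's propagation across processors; the batch dimension above merely mimics the pattern-parallel case.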
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Andreas Zell, Michael C. Vogt, Niels Mache, and Markus Hüttel "Performance aspects of mapping neural networks onto a massively parallel SIMD computer", Proc. SPIE 1965, Applications of Artificial Neural Networks IV, (2 September 1993); https://doi.org/10.1117/12.152537
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neurons
Neural networks
Human-machine interfaces
Artificial neural networks
Evolutionary algorithms
Microchannel plates
Computer simulations