Sequence learning with recurrent networks: analysis of internal representations
1 July 1992
Joydeep Ghosh, Vijay Karamcheti
Abstract
The recognition and learning of temporal sequences is fundamental to cognitive processing. Several recurrent networks attempt to encode past history through feedback connections from 'context units'. However, the internal representations formed by these networks are not well understood. In this paper, we use cluster analysis to interpret the hidden unit encodings formed when a network with context units is trained to recognize strings from a finite state machine. If the number of hidden units is small, the network forms fuzzy representations of the underlying machine states. With more hidden units, different representations may evolve for alternative paths to the same state. Thus, appropriate network size is indicated by the complexity of the underlying finite state machine. The analysis of internal representations can be used for modeling of an unknown system based on observation of its output sequences.
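The feedback scheme the abstract describes is the Elman-style arrangement in which context units hold a copy of the previous hidden state and feed it back as extra input. The sketch below is illustrative only, not the authors' code: it assumes one-hot symbol inputs, random untrained weights, hypothetical layer sizes, and a toy k-means standing in for the paper's cluster analysis of hidden unit encodings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's own architectures are not reproduced here.
n_in, n_hid = 4, 8
W_xh = rng.normal(scale=0.5, size=(n_in, n_hid))   # input  -> hidden
W_ch = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden

def run_sequence(seq):
    """Return the hidden-state trajectory for a sequence of one-hot symbols."""
    context = np.zeros(n_hid)       # context units start at zero
    states = []
    for x in seq:
        h = np.tanh(x @ W_xh + context @ W_ch)
        states.append(h)
        context = h                 # copy hidden state back into context units
    return np.array(states)

def kmeans(X, k, iters=20):
    """Toy k-means clustering, standing in for the paper's cluster analysis."""
    centers = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# Run one short string over a 4-symbol alphabet, then cluster the hidden
# states; states landing in one cluster would be read as one (possibly
# fuzzy) state of the underlying finite state machine.
one_hot = np.eye(n_in)
states = run_sequence(one_hot[[0, 1, 2, 1, 3]])
labels = kmeans(states, k=2)
```

With a trained network and many strings, the same trajectory-then-cluster step is what lets cluster membership be compared against the known machine states.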
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Joydeep Ghosh and Vijay Karamcheti "Sequence learning with recurrent networks: analysis of internal representations", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); https://doi.org/10.1117/12.140112
CITATIONS
Cited by 7 scholarly publications.
KEYWORDS
Network architectures
Neural networks
Artificial neural networks
Reverse modeling
Network security
Systems modeling
Computer programming
