KEYWORDS: Feature selection, Neural networks, Principal component analysis, Neurons, Process modeling, Visualization, Transform theory, Data compression, Matrices, Data processing
In this paper the idea of a deep learning classifier is developed. The effectiveness of a discriminative classifier, such as a multilayer perceptron or a support vector machine, can be improved by adding data preprocessing blocks: orthogonal feature selection (the Gram-Schmidt method) and nonlinear principal component analysis. We present a case study of various deep learning system structures (scenarios).
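The Gram-Schmidt feature selection step can be pictured with a short sketch (a hypothetical implementation; the paper's exact ranking criterion is not reproduced here): at each step the candidate feature best aligned with the target residual is selected, and the remaining features and the target are orthogonalized against it.

```python
import numpy as np

def gram_schmidt_select(X, y, k):
    """Greedy orthogonal feature selection (illustrative sketch).

    At each step, pick the column of X whose direction best explains the
    current target residual, then remove that direction from all columns
    and from the target, Gram-Schmidt style.
    """
    Xr = X.astype(float).copy()
    yr = y.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(Xr, axis=0)
        norms[norms == 0] = 1e-12
        # squared cosine between each remaining column and the target residual
        scores = (Xr.T @ yr) ** 2 / (norms ** 2 * (yr @ yr) + 1e-12)
        scores[selected] = -np.inf          # never re-select a column
        j = int(np.argmax(scores))
        selected.append(j)
        q = Xr[:, j] / np.linalg.norm(Xr[:, j])
        # orthogonalize every column and the target against the chosen direction
        Xr -= np.outer(q, q @ Xr)
        yr -= q * (q @ yr)
    return selected
```

Because each selected direction is removed before the next choice, redundant (correlated) features score near zero in later rounds, which is the point of the orthogonal variant over plain correlation ranking.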
KEYWORDS: Data modeling, Neural networks, Neurons, Statistical modeling, Image processing, Error analysis, Process modeling, Computer programming, Systems modeling, Data processing
This article presents a novel combination of the Recursive Auto-Associative Memory (RAAM) model with the Sensitivity-Based Linear Learning Method (SBLLM). Training results on the syntactic trees dataset are presented, confirming that the application of the SBLLM method to the RAAM model results in very fast learning and yields clustering results of the same quality as the original RAAM model.
KEYWORDS: Neural networks, Data modeling, Computer programming, Neurons, Detection and tracking algorithms, Process modeling, Systems modeling, Data processing, Image processing, Data conversion
This article summarises the results of implementing a Graph Neural Network classifier. The Graph Neural Network (GNN) model is a connectionist model capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on the model weights. This article presents research results concerning the impact of the penalty parameter on the model training process, together with the practical decisions made during the GNN implementation.
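The contraction constraint is commonly enforced by penalising the norm of the transition network's weights whenever it exceeds a threshold below one. A minimal sketch of such a penalty term follows; the threshold `mu` and weight `beta` are illustrative assumptions, not values from the article.

```python
import numpy as np

def contraction_penalty(W, mu=0.9, beta=10.0):
    """Penalty that grows when the induced 1-norm of the transition
    weight matrix W exceeds the contraction threshold mu < 1.

    Added to the training loss, this pushes the transition function
    toward being a contraction map (hypothetical formulation).
    """
    norm1 = np.max(np.abs(W).sum(axis=0))   # induced 1-norm: max column sum
    return beta * max(0.0, norm1 - mu)
```

In training, the total loss would be the classification error plus this term, so gradient descent trades off accuracy against keeping the transition map contractive; the article studies how the penalty parameter (here `beta`) affects that trade-off.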
This article describes processing methods used for the classification of short amino acid sequences. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each containing samples labeled as reacting or not reacting with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence will react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The selection method consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, the symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase, the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data are used to train LS-SVM classifiers.
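The first phase, selecting significant positions with Classification and Regression Trees, can be approximated by scoring each string position by the Gini impurity reduction of a single split on that position's symbol. This is a simplified stand-in for a full CART, with illustrative function names:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label list."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def position_importance(seqs, labels):
    """Score each position of equal-length sequences by the Gini
    impurity reduction of splitting on the symbol at that position
    (a one-level proxy for CART feature importance)."""
    labels = list(labels)
    base = gini(labels)
    scores = []
    for p in range(len(seqs[0])):
        by_symbol = {}
        for s, y in zip(seqs, labels):
            by_symbol.setdefault(s[p], []).append(y)
        weighted = sum(len(g) / len(labels) * gini(g)
                       for g in by_symbol.values())
        scores.append(base - weighted)
    return scores
```

Positions whose symbol perfectly partitions the reacting and non-reacting samples score highest, and only those positions are carried forward to the AAindex substitution step.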
SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting the classifiers with the best values of the performance measures.