Paper
Optimized hardware framework of MLP with random hidden layers for classification applications
12 May 2016
Abstract
Multilayer perceptron (MLP) networks with random hidden layers are very efficient at automatic feature extraction and offer significant performance improvements in the training process. They essentially employ a large collection of fixed, random features and are well suited to form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, built around a customized processing block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point operations for area efficiency. The design is validated for classification on two different datasets: an accuracy of ~90% is observed on the MNIST dataset and 75% for gender classification on the LFW dataset. The hardware achieves a 299× speed-up over the corresponding software realization.
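To make the underlying model concrete, the following is a minimal software sketch, not the authors' hardware design, of an MLP with a fixed, random hidden layer: the hidden weights are drawn once at random and never trained, and only the linear output layer is fit, here by least squares. The function names, hidden-layer size, and the use of tanh and NumPy are illustrative assumptions; the paper's hardware instead evaluates the neuron nonlinearity with a CORDIC-based fixed-point block.

```python
import numpy as np

def train_random_hidden_mlp(X, y, n_hidden=512, n_classes=10, seed=0):
    """Fit the output layer of an MLP whose hidden layer is random and fixed.

    X : (n_samples, n_features) input matrix
    y : (n_samples,) integer class labels
    """
    rng = np.random.default_rng(seed)
    # Fixed random hidden weights and biases (never updated during training).
    W_hidden = rng.normal(0.0, 1.0, size=(X.shape[1], n_hidden))
    b_hidden = rng.normal(0.0, 1.0, size=n_hidden)

    # Hidden activations; the hardware described in the paper would compute
    # this nonlinearity with a CORDIC-based fixed-point block.
    H = np.tanh(X @ W_hidden + b_hidden)

    # One-hot targets and a least-squares fit of the output weights.
    T = np.eye(n_classes)[y]
    W_out, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W_hidden, b_hidden, W_out

def predict(X, W_hidden, b_hidden, W_out):
    H = np.tanh(X @ W_hidden + b_hidden)
    return np.argmax(H @ W_out, axis=1)
```

Because only the output layer is learned, training reduces to a single linear solve, which is the source of the training-time advantage the abstract refers to.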
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Abdullah M. Zyarah, Abhishek Ramesh, Cory Merkel, and Dhireesha Kudithipudi "Optimized hardware framework of MLP with random hidden layers for classification applications", Proc. SPIE 9850, Machine Intelligence and Bio-inspired Computation: Theory and Applications X, 985007 (12 May 2016); https://doi.org/10.1117/12.2225498
KEYWORDS
Neurons
Network architectures
Databases
MATLAB
Electroluminescent displays
Mathematical modeling
Neodymium