Paper
26 June 2017
CPU architecture for a fast and energy-saving calculation of convolution neural networks
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, Stephan Hussmann
Abstract
One of the most difficult problems in the use of artificial neural networks is the computational capacity. Although large search-engine companies own specially developed hardware to provide the necessary computing power, the conventional user is left with the state-of-the-art method, namely the use of a graphics processing unit (GPU) as the computational basis. Although these processors are well suited to large matrix computations, they consume a great deal of energy. Therefore, a new processor based on a field programmable gate array (FPGA) has been developed and optimized for deep learning applications. This processor is presented in this paper. It can be adapted to a particular application (in this paper, an organic farming application). Its power consumption is only a fraction of that of a GPU implementation, and it should therefore be well suited for energy-saving applications.
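The abstract does not include code, but the core operation that both GPU and FPGA implementations of a convolutional network must accelerate is the 2D convolution itself. The following minimal sketch in C shows a direct single-channel convolution with valid padding; the function name, data layout, and dimensions are assumptions made for illustration only and do not describe the processor architecture presented in the paper.

#include <stddef.h>

/* Minimal direct 2D convolution (valid padding, single channel).
 * Illustrative only: names and layout are hypothetical, not taken
 * from the paper's FPGA design. */
void conv2d_valid(const float *in, size_t in_h, size_t in_w,
                  const float *kernel, size_t k_h, size_t k_w,
                  float *out)
{
    size_t out_h = in_h - k_h + 1;
    size_t out_w = in_w - k_w + 1;

    for (size_t y = 0; y < out_h; ++y) {
        for (size_t x = 0; x < out_w; ++x) {
            float acc = 0.0f;
            /* Multiply-accumulate over the kernel window; this inner
             * loop is the part a dedicated accelerator can map onto
             * parallel hardware units. */
            for (size_t ky = 0; ky < k_h; ++ky) {
                for (size_t kx = 0; kx < k_w; ++kx) {
                    acc += in[(y + ky) * in_w + (x + kx)]
                         * kernel[ky * k_w + kx];
                }
            }
            out[y * out_w + x] = acc;
        }
    }
}

The nested multiply-accumulate loops dominate the workload of a convolutional network, which is why a processor tailored to this pattern, as proposed in the paper, can trade the generality of a GPU for lower power consumption.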
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Florian J. Knoll, Michael Grelcke, Vitali Czymmek, Tim Holtorf, and Stephan Hussmann "CPU architecture for a fast and energy-saving calculation of convolution neural networks", Proc. SPIE 10334, Automated Visual Inspection and Machine Vision II, 103340P (26 June 2017); https://doi.org/10.1117/12.2270290
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Convolution, Field programmable gate arrays, Image processing, Neural networks, Data processing, Control systems, Digital signal processing