Paper
Improving neural network performance on SIMD architectures
8 December 2015
Elena Limonova, Dmitry Ilin, Dmitry Nikolaev
Proceedings Volume 9875, Eighth International Conference on Machine Vision (ICMV 2015); 98750L (2015) https://doi.org/10.1117/12.2228594
Event: Eighth International Conference on Machine Vision, 2015, Barcelona, Spain
Abstract
Neural network calculations for image recognition problems can be very time consuming. The use of SIMD extensions, available on a number of modern CPUs, is one way to speed up neural network processing. In this paper we propose three methods of increasing neural network performance on SIMD architectures; in our experiments we use ARM NEON as the example SIMD architecture. The first method uses the half-precision floating-point (half float) data type for matrix computations. The second uses a fixed-point data type for the same purpose. The third considers a vectorized implementation of activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for an image recognition task.
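As a rough illustration of the second method (fixed-point matrix computations on SIMD hardware), the C sketch below shows a NEON-vectorized fixed-point dot product of the kind a fixed-point matrix multiplication can be built from. The Q7.8 format, the function name and the loop structure are assumptions for illustration only, not the authors' implementation.

    #include <arm_neon.h>
    #include <stdint.h>

    /* Illustrative sketch: dot product of two Q7.8 fixed-point vectors
     * using ARM NEON intrinsics. Format and names are assumed. */
    static int32_t dot_q7_8(const int16_t *a, const int16_t *b, int n)
    {
        int32x4_t acc = vdupq_n_s32(0);
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            int16x8_t va = vld1q_s16(a + i);
            int16x8_t vb = vld1q_s16(b + i);
            /* widening multiply-accumulate into 32-bit lanes */
            acc = vmlal_s16(acc, vget_low_s16(va),  vget_low_s16(vb));
            acc = vmlal_s16(acc, vget_high_s16(va), vget_high_s16(vb));
        }
        /* horizontal sum of the four accumulator lanes */
        int32x2_t s = vadd_s32(vget_low_s32(acc), vget_high_s32(acc));
        s = vpadd_s32(s, s);
        int32_t sum = vget_lane_s32(s, 0);
        /* scalar tail for the remaining elements */
        for (; i < n; ++i)
            sum += (int32_t)a[i] * b[i];
        /* rescale the Q14.16 product sum back to Q7.8 */
        return sum >> 8;
    }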
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Elena Limonova, Dmitry Ilin, and Dmitry Nikolaev "Improving neural network performance on SIMD architectures", Proc. SPIE 9875, Eighth International Conference on Machine Vision (ICMV 2015), 98750L (8 December 2015); https://doi.org/10.1117/12.2228594
CITATIONS: Cited by 7 scholarly publications.
KEYWORDS: Neural networks, Convolutional neural networks, Data conversion, Matrix multiplication, Quantization, Basic research, Image transmission