Paper
31 July 2019 Group binary weight networks
Proceedings Volume 11198, Fourth International Workshop on Pattern Recognition; 1119812 (2019) https://doi.org/10.1117/12.2540888
Event: Fourth International Workshop on Pattern Recognition, 2019, Nanjing, China
Abstract
In recent years, quantizing the weights of a deep neural network draws increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace a filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such binarization method raises as the number of a filter's parameter increases. To reduce quantization error in existing network binarization methods, we propose group binary weight networks (GBWN), which divides the channels of each filter into groups and every channel in the same group shares the same scaling factor. We binarize the popular network architectures VGG, ResNet and DesneNet, and verify the performance on CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experiment results show that GBWN achieves considerable accuracy increment compared to recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.
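The following is a minimal NumPy sketch of the group-wise binarization idea described in the abstract; the function name, grouping scheme and per-group L1-optimal scaling are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def binarize_filter_groupwise(W, num_groups):
    """Approximate one convolutional filter W of shape (C, k, k) by
    group-wise binarization: the C channels are split into num_groups
    groups, and each group g is replaced by alpha_g * sign(W_g), where
    alpha_g is the mean absolute value of that group's weights."""
    C = W.shape[0]
    assert C % num_groups == 0, "channels must divide evenly into groups"
    group_size = C // num_groups
    W_hat = np.empty_like(W)
    for g in range(num_groups):
        sl = slice(g * group_size, (g + 1) * group_size)
        group = W[sl]
        alpha = np.abs(group).mean()          # per-group scaling factor
        W_hat[sl] = alpha * np.sign(group)    # binary weights times scale
    return W_hat

# Example: a 64-channel 3x3 filter split into 8 groups of 8 channels.
W = np.random.randn(64, 3, 3).astype(np.float32)
W_bwn  = np.abs(W).mean() * np.sign(W)               # single scale per filter (BWN-style)
W_gbwn = binarize_filter_groupwise(W, num_groups=8)  # one scale per channel group
print("single-scale error:", np.linalg.norm(W - W_bwn))
print("group-wise  error:", np.linalg.norm(W - W_gbwn))
```

With more scaling factors per filter, the group-wise approximation can only lower the reconstruction error relative to a single shared scale, which is the motivation stated in the abstract.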
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kailing Guo, Yicai Yang, Xiaofen Xing, and Xiangmin Xu "Group binary weight networks", Proc. SPIE 11198, Fourth International Workshop on Pattern Recognition, 1119812 (31 July 2019); https://doi.org/10.1117/12.2540888
KEYWORDS
Binary data
Quantization
Network architectures
Convolution
Neural networks
Algorithm development
Convolutional neural networks