TY - GEN
T1 - Optimization Techniques for Conversion of Quantization Aware Trained Deep Neural Networks to Lightweight Spiking Neural Networks
AU - Lee, Kyungchul
AU - Choi, Sunghyun
AU - Lew, Dongwoo
AU - Park, Jongsun
N1 - Funding Information:
This work was supported in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2018-0-01433) supervised by the IITP (Institute for Information & communications Technology Promotion), and in part by the Industrial Strategic Technology Development Program (10077445, Development of SoC technology based on Spiking Neural Cell for smart mobile and IoT Devices) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/27
Y1 - 2021/6/27
N2 - In this paper, we present a spiking neural network (SNN) conversion technique optimized for converting low bit-width artificial neural networks (ANNs) trained with quantization aware training (QAT). The conventional conversion technique suffers a significant accuracy drop on QAT ANNs due to the different activation function used in QAT ANNs. To minimize this accuracy drop, the proposed technique uses Spike-Norm Skip, which selectively applies threshold balancing. In addition, a subtraction-based reset is used to further reduce accuracy degradation. The proposed conversion technique achieves an accuracy of 89.92% (a 0.68% drop) with 5-bit weights on CIFAR-10 using VGG-16.
AB - In this paper, we present a spiking neural network (SNN) conversion technique optimized for converting low bit-width artificial neural networks (ANNs) trained with quantization aware training (QAT). The conventional conversion technique suffers a significant accuracy drop on QAT ANNs due to the different activation function used in QAT ANNs. To minimize this accuracy drop, the proposed technique uses Spike-Norm Skip, which selectively applies threshold balancing. In addition, a subtraction-based reset is used to further reduce accuracy degradation. The proposed conversion technique achieves an accuracy of 89.92% (a 0.68% drop) with 5-bit weights on CIFAR-10 using VGG-16.
KW - ANN-SNN conversion
KW - quantization aware training
KW - spiking neural networks
UR - http://www.scopus.com/inward/record.url?scp=85113973664&partnerID=8YFLogxK
U2 - 10.1109/ITC-CSCC52171.2021.9501427
DO - 10.1109/ITC-CSCC52171.2021.9501427
M3 - Conference contribution
AN - SCOPUS:85113973664
T3 - 2021 36th International Technical Conference on Circuits/Systems, Computers and Communications, ITC-CSCC 2021
BT - 2021 36th International Technical Conference on Circuits/Systems, Computers and Communications, ITC-CSCC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 36th International Technical Conference on Circuits/Systems, Computers and Communications, ITC-CSCC 2021
Y2 - 27 June 2021 through 30 June 2021
ER -