TY - JOUR
T1 - Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design
AU - Choi, Sunghyun
AU - Lew, Dongwoo
AU - Park, Jongsun
N1 - Funding Information:
This work was supported in part by the National Research Foundation of Korea grant funded by the Korea Government under Grant NRF-2020R1A2C3014820. The EDA tool was supported by the IC Design Education Center (IDEC), Korea.
Publisher Copyright:
© 2007-2012 IEEE.
PY - 2022/6/1
Y1 - 2022/6/1
N2 - In this paper, we present a novel early termination based training acceleration technique for temporal coding based spiking neural network (SNN) processor design. The proposed early termination scheme efficiently identifies non-contributing training images during the feedforward phase of training and skips the remaining processing to save training energy and time. A metric to evaluate each input image's contribution to training has been developed, and it is compared with a pre-determined threshold to decide whether to skip the rest of the training process. For the threshold selection, an adaptive threshold calculation method is presented to increase the computation skip ratio without sacrificing accuracy. A timestep splitting approach is also employed to allow more frequent early termination within the split timesteps, leading to further computation savings. The proposed early termination and timestep splitting techniques achieve 51.21/42.31/93.53/30.36% reductions in synaptic operations and 86.06/64.63/90.82/49.14% reductions in feedforward timesteps for the training process on the MNIST/Fashion-MNIST/ETH-80/EMNIST-Letters datasets, respectively. The hardware implementation of the proposed SNN processor in a 28 nm CMOS process shows training energy savings of 61.76/31.88% and computation cycle reductions of 69.10/36.26% on the MNIST/Fashion-MNIST datasets, respectively.
AB - In this paper, we present a novel early termination based training acceleration technique for temporal coding based spiking neural network (SNN) processor design. The proposed early termination scheme efficiently identifies non-contributing training images during the feedforward phase of training and skips the remaining processing to save training energy and time. A metric to evaluate each input image's contribution to training has been developed, and it is compared with a pre-determined threshold to decide whether to skip the rest of the training process. For the threshold selection, an adaptive threshold calculation method is presented to increase the computation skip ratio without sacrificing accuracy. A timestep splitting approach is also employed to allow more frequent early termination within the split timesteps, leading to further computation savings. The proposed early termination and timestep splitting techniques achieve 51.21/42.31/93.53/30.36% reductions in synaptic operations and 86.06/64.63/90.82/49.14% reductions in feedforward timesteps for the training process on the MNIST/Fashion-MNIST/ETH-80/EMNIST-Letters datasets, respectively. The hardware implementation of the proposed SNN processor in a 28 nm CMOS process shows training energy savings of 61.76/31.88% and computation cycle reductions of 69.10/36.26% on the MNIST/Fashion-MNIST datasets, respectively.
KW - Energy-efficient neuromorphic system
KW - on-chip learning
KW - spiking neural network
KW - temporal coding
UR - http://www.scopus.com/inward/record.url?scp=85132753986&partnerID=8YFLogxK
U2 - 10.1109/TBCAS.2022.3181808
DO - 10.1109/TBCAS.2022.3181808
M3 - Article
C2 - 35687615
AN - SCOPUS:85132753986
VL - 16
SP - 442
EP - 455
JO - IEEE Transactions on Biomedical Circuits and Systems
JF - IEEE Transactions on Biomedical Circuits and Systems
SN - 1932-4545
IS - 3
ER -