TY - GEN
T1 - An Energy-Efficient SNN Processor Design based on Sparse Direct Feedback and Spike Prediction
AU - Bang, Seunghwan
AU - Lew, Dongwoo
AU - Choi, Sunghyun
AU - Park, Jongsun
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
AB - In this paper, we present a novel spiking neural network (SNN) architecture based on a spike prediction technique, which provides low-cost on-chip learning using sparse direct feedback alignment (DFA). First, to reduce the repetitive synaptic operations in the feedforward pass, a spike prediction technique is proposed in which the output spikes of active and inactive neurons are predicted by tracing membrane potential changes. The proposed spike prediction achieves a 63.84% reduction in synaptic operations and can be exploited efficiently in both training and inference. In addition, the number of weight updates in the backward pass is reduced by applying sparse DFA, in which synaptic weight updates are computed from sparse feedback connections and an output error that is also sparse. As a result, the number of weight updates in the training process has been reduced to 65.17%. The SNN processor with the proposed spike prediction technique and sparse DFA has been implemented in a 65 nm CMOS process. The implementation results show that the processor achieves training energy savings of 52.16% with 0.3% accuracy degradation on the MNIST dataset. It consumes 1.18 µJ/image for inference and 1.34 µJ/image for training, achieving 97.46% accuracy on MNIST.
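N1 - The sketch below illustrates the two mechanisms named in the abstract, spike prediction and sparse DFA, as a minimal NumPy model; the LIF dynamics, the prediction rule and its thresholds, the sparsity fractions, and every name are illustrative assumptions, not the authors' hardware algorithm.

import numpy as np

rng = np.random.default_rng(0)

def lif_step_predicted(v, w, s_in, v_th=1.0, leak=0.9, lo=0.2, hi=0.8):
    # Flag neurons whose membrane potential trend makes their output
    # predictable, so their synaptic accumulation can be skipped
    # (the source of the claimed reduction in synaptic operations).
    # The lo/hi thresholds and this rule are assumed, not from the paper.
    inactive = v < lo * v_th            # predicted to stay silent
    active = v > hi * v_th              # predicted to fire
    compute = ~(inactive | active)
    v = leak * v
    v[compute] += w[compute] @ s_in     # synaptic ops only where needed
    s_out = (v >= v_th) | active
    v[s_out] = 0.0                      # reset neurons that fired
    return v, s_out.astype(np.float64)

def sparse_dfa_update(w, s_in, err, B, lr=1e-3, err_keep=0.1):
    # Direct feedback alignment: project the output error to this layer
    # through a fixed random matrix B. Sparsity enters twice, as in the
    # abstract: B is mostly zero, and only the largest-magnitude error
    # components are kept (fraction err_keep, an assumed value).
    k = max(1, int(err_keep * err.size))
    e = np.zeros_like(err)
    top = np.argsort(np.abs(err))[-k:]
    e[top] = err[top]
    delta = B @ e                       # zero rows of B give zero updates
    w -= lr * np.outer(delta, s_in)
    return w

# Toy usage: 100 inputs -> 64 hidden neurons, 10 output classes.
M, N, C = 100, 64, 10
w = rng.normal(0.0, 0.1, (N, M))
B = rng.normal(0.0, 0.1, (N, C)) * (rng.random((N, C)) < 0.1)  # sparse feedback
v = np.zeros(N)
s_in = (rng.random(M) < 0.2).astype(np.float64)
v, s_out = lif_step_predicted(v, w, s_in)
err = rng.normal(0.0, 1.0, C)           # stand-in for the output error
w = sparse_dfa_update(w, s_in, err, B)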
KW - Spiking neural network
KW - energy-efficient neuromorphic system
KW - on-chip learning
KW - sparse direct feedback alignment
KW - spike prediction
UR - http://www.scopus.com/inward/record.url?scp=85116468301&partnerID=8YFLogxK
DO - 10.1109/IJCNN52387.2021.9534107
M3 - Conference contribution
AN - SCOPUS:85116468301
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
Y2 - 18 July 2021 through 22 July 2021
ER -