Abstract
Deep learning has shown outstanding performance in various fields and is increasingly deployed in privacy-critical domains. If sensitive data used to train a deep learning model are exposed, serious privacy threats can result. To protect individual privacy, we propose a novel activation function and a stochastic gradient descent procedure for applying differential privacy to deep learning. Experiments show that the proposed method effectively protects privacy and outperforms previous approaches.
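As context for the abstract, the sketch below illustrates one common way differential privacy is applied during stochastic gradient descent: per-sample gradients are clipped and Gaussian noise is added, with a bounded activation (tanh here) keeping gradients small. This is a hypothetical illustration only; the paper's actual activation function, noise calibration, and training procedure are not reproduced here, and all names and hyperparameters are assumptions.

```python
# Hypothetical DP-SGD-style sketch; NOT the paper's method.
# Hyperparameters (lr, clip_norm, noise_mult) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def bounded_activation(z):
    # A bounded activation (tanh) keeps per-sample gradients small,
    # which helps when gradients are clipped for differential privacy.
    return np.tanh(z)

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD-style update: clip each per-sample gradient, add Gaussian noise."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        pred = bounded_activation(x @ w)           # forward pass for one sample
        g = (pred - y) * (1.0 - pred ** 2) * x     # gradient of squared error w.r.t. w
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip_norm)         # clip to bound per-sample sensitivity
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * (g_sum + noise) / len(X_batch)

# Toy usage: train a linear model with tanh output on random data.
X = rng.normal(size=(256, 5))
y = np.sign(X[:, 0])
w = np.zeros(5)
for _ in range(100):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
```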
| Original language | English |
| --- | --- |
| Pages (from-to) | 905-908 |
| Number of pages | 4 |
| Journal | IEICE Transactions on Information and Systems |
| Volume | 104 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 2021 |
Keywords
- Activation function
- Deep learning
- Differential privacy
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering
- Artificial Intelligence