TY - JOUR
T1 - Safe semi-supervised learning using a Bayesian neural network
AU - Bae, Jinsoo
AU - Lee, Minjung
AU - Kim, Seoung Bum
N1 - Funding Information:
The authors would like to thank the editor and reviewers for their careful evaluation and helpful recommendations that significantly improved the quality of the paper. This research was supported by the Agency for Defense Development (ADD) (No. UI2100062D, Technique Analysis and Model Prototyping for the Capability Evaluation and Weapon Correlation of Friend and Foe) as a part of AI-Command Decision Support for Future Ground Operations (AICDS).
Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2022/10
Y1 - 2022/10
N2 - Semi-supervised learning attempts to use a large set of unlabeled data to increase the prediction accuracy of machine learning models when the amount of labeled data is limited. However, in realistic settings, unlabeled data may degrade performance because they contain out-of-distribution (OOD) data that differ from the labeled data. To address this issue, safe semi-supervised deep learning has recently been proposed. This study presents a new safe semi-supervised algorithm that uses an uncertainty-aware Bayesian neural network. Our proposed method, safe uncertainty-based consistency training (SafeUC), uses Bayesian uncertainty to minimize the harmful effects caused by unlabeled OOD examples. The proposed method improves the model's generalization performance by regularizing the network to be consistent against uncertain noise. Moreover, to avoid uncertain prediction results, the proposed method includes a practical inference technique based on well-calibrated uncertainty. Experimental results on CIFAR-10 and SVHN demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on all semi-supervised learning tasks across varying rates of OOD data.
AB - Semi-supervised learning attempts to use a large set of unlabeled data to increase the prediction accuracy of machine learning models when the amount of labeled data is limited. However, in realistic settings, unlabeled data may degrade performance because they contain out-of-distribution (OOD) data that differ from the labeled data. To address this issue, safe semi-supervised deep learning has recently been proposed. This study presents a new safe semi-supervised algorithm that uses an uncertainty-aware Bayesian neural network. Our proposed method, safe uncertainty-based consistency training (SafeUC), uses Bayesian uncertainty to minimize the harmful effects caused by unlabeled OOD examples. The proposed method improves the model's generalization performance by regularizing the network to be consistent against uncertain noise. Moreover, to avoid uncertain prediction results, the proposed method includes a practical inference technique based on well-calibrated uncertainty. Experimental results on CIFAR-10 and SVHN demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on all semi-supervised learning tasks across varying rates of OOD data.
KW - Bayesian neural network
KW - Consistency regularization
KW - Out-of-distribution
KW - Safe semi-supervised deep learning
KW - Uncertain noise
KW - Uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85137176178&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2022.08.094
DO - 10.1016/j.ins.2022.08.094
M3 - Article
AN - SCOPUS:85137176178
VL - 612
SP - 453
EP - 464
JO - Information Sciences
JF - Information Sciences
SN - 0020-0255
ER -