TY - JOUR
T1 - VAPER: A deep learning model for explainable probabilistic regression
AU - Jung, Seungwon
AU - Noh, Yoona
AU - Moon, Jaeuk
AU - Hwang, Eenjun
N1 - Funding Information:
This work was supported by the Korea Environment Industry & Technology Institute (KEITI) through the Exotic Invasive Species Management Program, funded by the Korea Ministry of Environment (MOE) (2021002280004), and in part by the Energy Cloud R&D Program (Grant number: 2019M3F2A1073184) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT.
Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/9
Y1 - 2022/9
AB - A probabilistic regression model provides decision-makers with a regression output together with a quantitative measure of its uncertainty for given input variables. Although this uncertainty can help avoid serious consequences of overconfidence in the output, such as misdiagnosis or a blackout, it only quantifies how uncertain the output is; it cannot explain the reasons behind the output or its uncertainty. If the output and its uncertainty were presented together with their reasons, decision-makers could identify more suitable alternatives. However, despite the development of artificial intelligence methods for explaining machine learning models and their outputs, few probabilistic regression models offer this functionality. Therefore, in this paper, we propose a variational autoencoder-based model for explainable probabilistic regression, called VAPER. VAPER provides a parametric probability distribution of an output variable conditioned on the input variables and interprets it using layer-wise relevance propagation to investigate the effect of each input variable. To evaluate the effectiveness of the proposed model, we performed extensive experiments on several datasets. The experimental results demonstrate that VAPER achieves regression performance competitive with existing models while also providing effective explainability.
KW - Explainable artificial intelligence
KW - Layer-wise relevance propagation
KW - Probabilistic regression
KW - Variational autoencoder
UR - http://www.scopus.com/inward/record.url?scp=85135969999&partnerID=8YFLogxK
DO - 10.1016/j.jocs.2022.101824
M3 - Article
AN - SCOPUS:85135969999
VL - 63
JO - Journal of Computational Science
JF - Journal of Computational Science
SN - 1877-7503
M1 - 101824
ER -