Residual neural processes

Byung Jun Lee, Seunghoon Hong, Kee Eung Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A Neural Process (NP) is a map from a set of observed input-output pairs to a predictive distribution over functions, designed to mimic the inference mechanisms of other stochastic processes. NPs have been shown to work effectively in tasks that require complex distributions where traditional stochastic processes struggle, e.g. image completion tasks. This paper concerns the practical capacity of set function approximators despite their universality. By delving deeper into the relationship between an NP and a Bayesian last layer (BLL), it is possible to see that NPs may struggle on simple examples that other stochastic processes can easily solve. In this paper, we propose a simple yet effective remedy: the Residual Neural Process (RNP), which leverages a traditional BLL for faster training and better prediction. We demonstrate that the RNP shows faster convergence and better performance, both qualitatively and quantitatively.
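To make the abstract's description concrete, the sketch below illustrates the generic NP structure it refers to: each context pair is encoded, the encodings are aggregated with a permutation-invariant mean, and a decoder maps target inputs plus the aggregated representation to a predictive mean and variance. This is a minimal, untrained NumPy illustration of that interface, not the paper's RNP model; all weights and dimensions here are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, y, W):
    # Map each context pair (x_i, y_i) to a representation r_i.
    return np.tanh(np.concatenate([x, y], axis=1) @ W)

def np_predict(x_ctx, y_ctx, x_tgt, W_enc, W_mu, W_sigma):
    # Aggregate context representations with a permutation-invariant mean.
    r = encode(x_ctx, y_ctx, W_enc).mean(axis=0)
    # Condition each target input on the aggregated representation.
    h = np.concatenate([x_tgt, np.tile(r, (len(x_tgt), 1))], axis=1)
    mu = h @ W_mu                                 # predictive mean
    sigma = np.log1p(np.exp(h @ W_sigma)) + 1e-3  # softplus keeps variance positive
    return mu, sigma

# Toy dimensions: 1-D inputs/outputs, 4-D representation (hypothetical).
W_enc = rng.normal(size=(2, 4))
W_mu = rng.normal(size=(5, 1))
W_sigma = rng.normal(size=(5, 1))

x_ctx = rng.normal(size=(3, 1))
y_ctx = np.sin(x_ctx)                             # toy target function
x_tgt = rng.normal(size=(5, 1))
mu, sigma = np_predict(x_ctx, y_ctx, x_tgt, W_enc, W_mu, W_sigma)
```

Because the aggregation is a mean over context encodings, the predictive distribution is invariant to the ordering of the observed pairs, which is the set-function property the abstract's capacity discussion is about.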

Original language: English
Title of host publication: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Pages: 4545-4552
Number of pages: 8
ISBN (Electronic): 9781577358350
Publication status: Published - 2020
Externally published: Yes
Event: 34th AAAI Conference on Artificial Intelligence, AAAI 2020 - New York, United States
Duration: 2020 Feb 7 - 2020 Feb 12

Publication series

Name: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence

Conference

Conference: 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Country/Territory: United States
City: New York
Period: 20/2/7 - 20/2/12

ASJC Scopus subject areas

  • Artificial Intelligence
