---
title: A Generalization Bound for Online Variational Inference
crossref: acml19
abstract: Bayesian inference provides an attractive online-learning framework for analyzing sequential data, and offers generalization guarantees that hold even under model mismatch and with adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice, and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this paper, we show that this is indeed the case for some variational inference (VI) algorithms. We consider a few existing online, tempered VI algorithms, as well as a new algorithm, and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms that rely on approximate Bayesian methods.
layout: inproceedings
series: Proceedings of Machine Learning Research
id: cherief-abdellatif19a
month: 0
tex_title: A Generalization Bound for Online Variational Inference
firstpage: 662
lastpage: 677
page: 662-677
order: 662
cycles: false
bibtex_author: Ch\'erief-Abdellatif, Badr-Eddine and Alquier, Pierre and Khan, Mohammad Emtiyaz
author:
- given: Badr-Eddine
  family: Chérief-Abdellatif
- given: Pierre
  family: Alquier
- given: Mohammad Emtiyaz
  family: Khan
date: 2019-10-15
address:
publisher: PMLR
container-title: Proceedings of The Eleventh Asian Conference on Machine Learning
volume: '101'
genre: inproceedings
issued:
  date-parts:
  - 2019
  - 10
  - 15
pdf:
extras:
---
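
Assembled from the fields above (`bibtex_author`, `tex_title`, `container-title`, `page`, `volume`, `series`, `publisher`, and the issue year), the citation record corresponds roughly to the following BibTeX entry. This is a sketch for convenience; the entry PMLR actually generates from this front matter may carry additional fields, such as an editor list or the PDF URL, that are not present here.

```bibtex
@InProceedings{cherief-abdellatif19a,
  title     = {A Generalization Bound for Online Variational Inference},
  author    = {Ch\'erief-Abdellatif, Badr-Eddine and Alquier, Pierre and Khan, Mohammad Emtiyaz},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {662--677},
  year      = {2019},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR}
}
```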