<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lynton Ardizzone</style></author><author><style face="normal" font="default" size="100%">Radek Mackowiak</style></author><author><style face="normal" font="default" size="100%">Carsten Rother</style></author><author><style face="normal" font="default" size="100%">Ullrich Köthe</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Exact Information Bottleneck with Invertible Neural Networks: Getting the Best of Discriminative and Generative Modeling</style></title></titles><dates><year><style face="normal" font="default" size="100%">2020</style></year><pub-dates><date><style face="normal" font="default" size="100%">Jan</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://arxiv.org/abs/2001.06448</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Generative models are more informative about underlying phenomena than discriminative ones and offer superior uncertainty quantification and out-of-distribution robustness. However, these advantages often come at the expense of reduced classification accuracy. The Information Bottleneck (IB) objective formulates this trade-off in a clean information-theoretic way, but its practical application is hampered by the lack of accurate high-dimensional estimators of mutual information (MI), its main constituent. To overcome this limitation, we develop the theory and methodology of IB-INNs, which optimize the IB objective by means of Invertible Neural Networks (INNs), without the need for approximations of MI.
Our experiments show that IB-INNs allow for precise adjustment of the generative/discriminative trade-off: they learn accurate models of the class-conditional likelihoods, generalize well to unseen data, and reliably detect out-of-distribution examples, while at the same time achieving classification accuracy close to that of purely discriminative feed-forward networks.</style></abstract></record></records></xml>