<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>27</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kruse, Jakob</style></author><author><style face="normal" font="default" size="100%">Ardizzone, Lynton</style></author><author><style face="normal" font="default" size="100%">Rother, Carsten</style></author><author><style face="normal" font="default" size="100%">Köthe, Ullrich</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Benchmarking Invertible Architectures on Inverse Problems</style></title></titles><dates><year><style face="normal" font="default" size="100%">2019</style></year></dates><number><style face="normal" font="default" size="100%">i</style></number><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Recent work demonstrated that flow-based invertible neural networks are promising tools for solving ambiguous inverse problems. Following up on this, we investigate how ten invertible architectures and related models fare on two intuitive, low-dimensional benchmark problems, obtaining the best results with coupling layers and simple autoencoders. We hope that our initial efforts inspire other researchers to evaluate their invertible architectures in the same setting and put forth additional benchmarks, so our evaluation may eventually grow into an official community challenge.</style></abstract></record></records></xml>