<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Leistner, Titus</style></author><author><style face="normal" font="default" size="100%">Schilling, Hendrik</style></author><author><style face="normal" font="default" size="100%">Mackowiak, Radek</style></author><author><style face="normal" font="default" size="100%">Gumhold, Stefan</style></author><author><style face="normal" font="default" size="100%">Rother, Carsten</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Learning to Think Outside the Box: Wide-Baseline Light Field Depth Estimation with EPI-Shift</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings - 2019 International Conference on 3D Vision, 3DV 2019</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Computer vision</style></keyword><keyword><style face="normal" font="default" size="100%">deep learning</style></keyword><keyword><style face="normal" font="default" size="100%">depth estimation</style></keyword><keyword><style face="normal" font="default" size="100%">light fields</style></keyword><keyword><style face="normal" font="default" size="100%">Stereo</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style face="normal" font="default" size="100%">September</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://arxiv.org/abs/1909.09059</style></url><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1109/3DV.2019.00036</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">249–257</style></pages><isbn><style face="normal" font="default" size="100%">9781728131313</style></isbn><language><style face="normal" 
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We propose a method for depth estimation from light field data, based on a fully convolutional neural network architecture. Our goal is to design a pipeline that achieves highly accurate results for both small- and wide-baseline light fields. Since light field training data is scarce, all learning-based approaches use a small receptive field and operate on small disparity ranges. To work with wide-baseline light fields, we introduce the idea of EPI-Shift: we virtually shift the light field stack, which allows the network to retain a small receptive field independent of the disparity range. In this way, our approach &quot;learns to think outside the box&quot; of the receptive field. Our network performs joint classification of integer disparities and regression of disparity offsets. A U-Net component provides excellent long-range smoothing. EPI-Shift considerably outperforms state-of-the-art learning-based approaches and is on par with hand-crafted methods. We demonstrate this on a publicly available, synthetic, small-baseline benchmark and on large-baseline real-world recordings.</style></abstract></record></records></xml>