| Abstract | In contrast to traditional imaging, the higher dimensionality of a light field offers directional information about the
captured intensity. This information can be leveraged to estimate the disparity of 3D points in the captured scene.
A recent approach to estimate disparities analyzes the structure tensor and evaluates the orientation on epipolar
plane images (EPIs). While the resulting disparity maps are generally satisfactory, the allowed disparity range is
small and occlusion boundaries can become smeared and noisy. In this paper, we first introduce an approach to
extend the total allowed disparity range. This allows, for example, the investigation of camera setups with a larger
baseline, such as those in the Middlebury 3D light fields. Second, we introduce a method to handle the difficulties arising at
boundaries between fore- and background objects to achieve sharper edge transitions. |
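The structure-tensor orientation analysis mentioned above can be sketched as follows. This is a minimal illustration, not the implementation evaluated in the paper: it assumes a horizontal-parallax EPI indexed `[s, x]` (view index `s`, image column `x`), a Gaussian smoothing scale, and one common sign convention for the orientation angle.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma=2.0):
    """Estimate a per-pixel disparity map from a single EPI via the structure tensor.

    `epi` is a 2D array indexed [s, x], so a scene point at disparity d
    traces a line x = d*s + const across the views.
    """
    Is, Ix = np.gradient(epi)                 # derivatives along s and x
    # smoothed structure-tensor components
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxs = gaussian_filter(Ix * Is, sigma)
    Jss = gaussian_filter(Is * Is, sigma)
    # orientation of the dominant local line; its slope dx/ds is the disparity
    phi = 0.5 * np.arctan2(2.0 * Jxs, Jxx - Jss)
    return -np.tan(phi)

# Synthetic EPI with a known, constant disparity of 0.5 pixels per view.
d_true = 0.5
s = np.arange(-8, 9)[:, None]                 # 17 views
x = np.arange(128)[None, :]
epi = np.sin(0.3 * (x - d_true * s)) + 0.5 * np.sin(0.11 * (x - d_true * s))

d_est = epi_disparity(epi)
print(d_est[8, 30:98].mean())                 # close to 0.5 away from the borders
```

On a real light field, one such map is computed per EPI (one per image row) and the per-pixel estimates are assembled into a full disparity image; the sign and scale of the recovered slope depend on the camera-grid parameterization.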