In recent years, the notion of "Light Fields" (LF) has become a useful tool for parameterizing and theoretically analyzing problems at the intersection of Computer Graphics, Computational Photography, and Computer Vision. In this context, HCI research focuses on several related topics: LF theory and simulation, LF recording with camera arrays and plenoptic cameras, and LF analysis for inspection tasks.
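As a minimal illustration of how a light field is commonly parameterized (the standard two-plane parameterization, not necessarily the exact convention used in this work), a discretized LF can be stored as a 4D array indexed by two angular and two spatial coordinates; the array sizes below are illustrative assumptions:

```python
import numpy as np

# Sketch of the two-plane light-field parameterization L(u, v, s, t):
# a ray is indexed by its intersections with two parallel planes,
# (u, v) on the camera/aperture plane and (s, t) on the image plane.
# Resolutions here are illustrative assumptions.
U, V, S, T = 5, 5, 64, 64          # 5x5 angular views, 64x64 spatial samples
lf = np.zeros((U, V, S, T))        # discretized light field L[u, v, s, t]

# A single sub-aperture view is a 2D slice at fixed angular coordinates:
center_view = lf[U // 2, V // 2]   # the central view, shape (S, T)
```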
Light Field Analysis
Changing natural illumination causes strong variations in the appearance of common object surfaces with non-uniform reflectance properties. This problem is of great practical relevance, since stable (robust) feature matches are the basis for a wide range of applications such as (stereo) depth estimation and optical-flow computation. The Light Field of a scene implicitly contains both the bidirectional reflectance information and the 3D geometry of an object. With this additional information, we are confident of solving the feature-matching problem even under difficult lighting conditions. The LF also implicitly contains multi-angle views of a scene, which can be used in novel inspection scenarios where we plan to detect defects directly in the LF representation.
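One standard way the multi-angle views in a 4D LF expose scene geometry is the epipolar-plane image (EPI): fixing one angular and one spatial coordinate yields a 2D slice in which scene points trace straight lines whose slope encodes depth. The sketch below (illustrative names and dimensions, not the group's actual pipeline) extracts such a slice:

```python
import numpy as np

# In a 4D light field L[u, v, s, t], fixing v and t gives an
# epipolar-plane image (EPI) over (u, s).  Scene points appear as
# straight lines in the EPI; the line slope encodes depth, which is
# the basis of many LF depth-estimation methods.
def epi_slice(lf, v_fixed, t_fixed):
    """Return the (u, s) epipolar-plane image at fixed v and t."""
    return lf[:, v_fixed, :, t_fixed]

lf = np.random.rand(5, 5, 64, 64)            # hypothetical 4D light field
epi = epi_slice(lf, v_fixed=2, t_fixed=32)   # shape (5, 64)
```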
Light Fields and Active Illumination
So far, Light Fields have been acquired under constant illumination. The concept of Non-Local Reflectance Fields (NLRF) adds a parameterization over the state of the (non-local) illumination environment. This leads to the following interesting questions: How should cameras and light sources be placed so that properties of interest (such as deviations in shape or reflectance) can be detected and quantified robustly? Can we "learn" observer and lighting positions?
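To make the extra parameterization concrete, one hedged sketch (names and dimensions are illustrative assumptions, not the group's definition) is to index a stack of light fields by a discrete illumination state. Since light transport is linear, the appearance under any weighted combination of the basis illuminations is the corresponding weighted sum of the recorded fields:

```python
import numpy as np

# Sketch: extend the 4D light field L[u, v, s, t] by an illumination
# index i, giving R[i, u, v, s, t] -- one light field per basis
# illumination state.  By superposition, mixed lighting is a weighted
# sum over the illumination axis.
I, U, V, S, T = 3, 5, 5, 32, 32
nlrf = np.random.rand(I, U, V, S, T)         # one LF per basis light source

weights = np.array([0.5, 0.3, 0.2])          # hypothetical lighting mix
relit = np.tensordot(weights, nlrf, axes=1)  # LF under the mixed lighting
```

This linear structure is what would let observer/lighting configurations be optimized or "learned": candidate placements can be evaluated by recombining recorded basis illuminations instead of re-capturing the scene.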