Weakly Supervised Learning for Industrial Optical Inspection

In the following, we present a synthetic benchmark corpus for defect detection on statistically textured surfaces. We hope that it facilitates the further development and benchmarking of classification algorithms for industrial optical inspection applications. All data is publicly available and can be downloaded from this page.

Simulated Camera Data for EMVA 1288 Verification

This simulated data can be used to verify methods and algorithms according to the EMVA 1288 standard.
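As a purely illustrative example of what such a verification could look like, the sketch below estimates the overall system gain with the photon-transfer (mean-variance) approach described in EMVA 1288. The function name, data layout and use of NumPy are our own assumptions and not part of the dataset or of any reference implementation.

    import numpy as np

    def photon_transfer_gain(bright_pairs, dark_pairs):
        # Estimate the overall system gain K (DN per electron) using the
        # photon-transfer (mean-variance) idea from EMVA 1288: the temporal
        # noise variance grows linearly with the mean signal, with slope K.
        # Each element of bright_pairs / dark_pairs is a pair of images
        # (NumPy arrays) taken under identical conditions, one pair per
        # exposure step, so that the difference image isolates temporal noise.
        means, variances = [], []
        for (ba, bb), (da, db) in zip(bright_pairs, dark_pairs):
            mean_signal = 0.5 * (ba.mean() + bb.mean()) - 0.5 * (da.mean() + db.mean())
            var_bright = 0.5 * np.var(ba.astype(np.float64) - bb.astype(np.float64))
            var_dark = 0.5 * np.var(da.astype(np.float64) - db.astype(np.float64))
            means.append(mean_signal)
            variances.append(var_bright - var_dark)
        # Linear fit variance = K * mean; the slope is the system gain K.
        return np.polyfit(means, variances, 1)[0]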

Real Static Scene Image Sequences for the Evaluation of Structure From Motion Methods in an Automotive Context

This data set contains five real-life sequences recorded by a stereo camera setup mounted in a car moving through (almost) static everyday scenes. Their purpose is to evaluate structure-from-motion approaches in an automotive context. The recorded image data and rectified stereo image pairs are provided.
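Since rectified stereo pairs are provided, metric depth can be recovered from disparity in the usual way, Z = f * B / d. The sketch below is only an illustration; the focal length and baseline arguments are placeholders that would have to be taken from the calibration data shipped with the sequences.

    import numpy as np

    def depth_from_disparity(disparity, focal_length_px, baseline_m):
        # For a rectified stereo pair, depth Z = f * B / d, with the focal
        # length f in pixels, the baseline B in metres and the disparity d
        # in pixels. Non-positive disparities are marked invalid (NaN).
        disparity = np.asarray(disparity, dtype=np.float64)
        depth = np.full(disparity.shape, np.nan)
        valid = disparity > 0
        depth[valid] = focal_length_px * baseline_m / disparity[valid]
        return depth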

Farman Institute 3D Point Sets

This webpage is dedicated to a new type of data: high-precision raw data from the acquisition of objects with a 3D laser scanner. The dataset was peer-reviewed by Image Processing On Line: Farman Institute 3D Point Sets.

Challenging Data for Stereo and Optical Flow

Selected scenes for stereo disparity and optical flow estimation that contain as-yet unsolved challenges.

KinectFusion Capture Tool and Example Datasets

KinectFusion Datasets and Capture Tool mentioned in the IROS 2012 paper: When Can We Use KinectFusion for Ground Truth Acquisition?

The HCIBOX Depth Evaluation Dataset

This dataset was created for sensor fusion and depth camera evaluation tasks and consists of three calibrated stereo and ToF camera views with ground-truth depth.
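As a rough sketch of how estimated depth maps might be scored against the provided ground truth, the snippet below computes mean absolute error and RMSE over valid pixels. These metrics and the function name are our own illustrative choices, not an evaluation protocol prescribed by the dataset.

    import numpy as np

    def depth_error_metrics(estimated, ground_truth):
        # Compare an estimated depth map against ground truth on pixels
        # where both values are finite and the ground truth is positive.
        est = np.asarray(estimated, dtype=np.float64)
        gt = np.asarray(ground_truth, dtype=np.float64)
        valid = np.isfinite(est) & np.isfinite(gt) & (gt > 0)
        diff = est[valid] - gt[valid]
        return {"MAE": np.abs(diff).mean(),
                "RMSE": np.sqrt(np.mean(diff ** 2))}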

Prague Texture Segmentation Datagenerator and Benchmark

The Prague texture segmentation data-generator and benchmark is a web-based service designed to mutually compare and rank different static or dynamic texture segmenters and to support the development of new supervised or unsupervised classification methods. The benchmark verifies their performance characteristics on monospectral, multispectral, bidirectional texture function (BTF), satellite, or dynamic texture data and enables testing of their noise robustness as well as their scale, rotation, and illumination invariance.

Synthesizing Real World Stereo Challenges

On this page, we provide the datasets discussed in the paper Synthesizing Real World Stereo Challenges. With these datasets, we aim to isolate specific challenges for stereo matchers; previous synthetic datasets did not separate the different problematic issues in stereo analysis.

Simulation of Time-of-Flight Sensors using Global Illumination

Time-of-Flight (ToF) cameras use specialized sensors and modulated infrared light to simultaneously obtain depth, amplitude, and intensity images. Depth images from such cameras suffer from various errors which exhibit a more complex behavior than those of traditional intensity images. Of these errors, the phenomenon of multi-reflection or multi-path interference poses the biggest challenge to researchers. It is caused by indirect light paths between camera and light source and is therefore dependent on scene geometry. While simulated data can be used for ground truth evaluation and whitebox testing, current simulators do not model multipath effects. The method we present is capable of simulating all scene-dependent effects by taking global illumination into consideration. This is accomplished by modifying a bidirectional path tracing algorithm so that it accounts for the time-dependent propagation of modulated light in a scene. Furthermore, by combining the proposed method with a previous hardware simulator, we are able to reproduce all effects in ToF cameras. The system was validated both on test targets with known real ToF camera responses and qualitatively on a more complex room scene.
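The simulator described above is built on a modified bidirectional path tracer; the sketch below is not that method, but only a minimal numerical illustration of the underlying four-phase continuous-wave measurement model (the modulation frequency, distances, and amplitudes are assumed values), showing why even a single indirect light path biases the recovered depth.

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def tof_depth_estimate(distances_m, amplitudes, f_mod=20e6):
        # Each light path contributes a sinusoidal return whose phase is
        # proportional to its one-way distance (the factor 4*pi*f_mod/C
        # accounts for the round trip). The sensor only sees the sum,
        # samples its correlation at four phase offsets and recovers a
        # single phase, so indirect paths shift the estimated depth.
        amplitudes = np.asarray(amplitudes, dtype=np.float64)
        phases = 4.0 * np.pi * f_mod * np.asarray(distances_m) / C
        offsets = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
        A = np.array([np.sum(amplitudes * np.cos(phases - tau)) for tau in offsets])
        phi = np.arctan2(A[1] - A[3], A[0] - A[2]) % (2.0 * np.pi)
        return C * phi / (4.0 * np.pi * f_mod)

    # A direct return at 2.0 m alone vs. the same return plus a weaker,
    # longer indirect path: the second estimate is biased towards a
    # larger depth, which is exactly the multipath error discussed above.
    print(tof_depth_estimate([2.0], [1.0]))
    print(tof_depth_estimate([2.0, 2.6], [1.0, 0.5]))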